The practice of technological deception in videoconferencing systems for distance learning and ways to counter it

Authors

DOI:

https://doi.org/10.34069/AI/2021.40.04.16

Keywords:

deception, deepfake, neural networks, machine learning, creating fake videos.

Abstract

This article addresses the problem of high-tech deception carried out through videoconferencing tools during distance learning, a problem of increased relevance given the digitalization of the educational process and the growing digital literacy of young people. The article presents several methods of fraud, including a relatively new and very popular technology: deepfakes. It examines two popular face-replacement tools in detail, providing step-by-step instructions for creating, configuring, and applying each in practice. The organizational methods presented will help teachers detect even the most carefully disguised forgery, including future voice spoofing, and stop any attempt at deception. The article concludes by describing a system of technologies used to combat deepfakes and assessing the danger posed by existing tools. The objectives of this article are to raise public interest in the problem of face substitution and high-tech deception in general, to create grounds for discussing the need to switch to a remote format for events, and to broaden the reader's horizons while outlining areas for further work and research.
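As a minimal illustration of the kind of frame-level cue that anti-deepfake systems of the sort the abstract mentions can rely on, the sketch below uses the well-known observation that early deepfake generators produced faces that blinked abnormally rarely. This example is not from the article itself: the function names are hypothetical, and the eye-openness scores stand in for the output of an upstream facial-landmark detector, which is assumed rather than implemented here.

```python
# Naive blink-rate heuristic (hypothetical illustration, not the
# article's method): early deepfake generators often produced faces
# that blinked too rarely, so an abnormally low blink count over a
# clip is one weak signal of a synthetic face. The per-frame
# eye-openness scores (1.0 = fully open, 0.0 = closed) stand in for
# the output of a facial-landmark detector, which is assumed here.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in the score stream."""
    blinks = 0
    was_closed = False
    for score in eye_openness:
        is_closed = score < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def blink_rate_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate falls below a typical human baseline."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# 10 seconds of frames at 30 fps containing a single blink:
scores = [1.0] * 150 + [0.1] * 5 + [1.0] * 145
```

A heuristic like this is deliberately weak on its own; production detection systems such as those built on FaceForensics++ (cited below) combine many learned artifacts rather than a single hand-picked cue.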

Downloads

Download data is not yet available.

Author Biographies

Petr A. Ukhov, Moscow Aviation Institute (National Research University), Moscow, Russia.

PhD in Technical Sciences, Associate Professor, Moscow Aviation Institute (National Research University), Moscow, Russia.

Boris A. Dmitrochenko, Moscow Aviation Institute (National Research University), Moscow, Russia.

Technician, Moscow Aviation Institute (National Research University), Moscow, Russia.

Anatoly V. Ryapukhin, Moscow Aviation Institute (National Research University), Moscow, Russia.

Senior Lecturer, Moscow Aviation Institute (National Research University), Moscow, Russia.

References

Baikinova, A. (2020). What is Deepfake and why is this technology dangerous? Informburo.kz. Retrieved from: https://informburo.kz/stati/chto-takoe-deep-fake-i-chem-opasna-eta-tehnologiya.html

Borshigov, K. (2018). Generative adversarial network (GAN). Beginner's Guide. Neurohive. Retrieved from: https://neurohive.io/ru/osnovy-data-science/gan-rukovodstvo-dlja-novichkov/

Future2day (2020). Neural networks. Retrieved from: https://future2day.ru/nejronnye-seti/

Github (2020a). FaceIT_Live. Retrieved from: https://github.com/alew3/faceit_live

Github (2020b). DeepFaceLab. Retrieved from: https://github.com/iperov/DeepFaceLab

Github (2020c). FaceSwap. Retrieved from: https://github.com/MarekKowalski/FaceSwap/

Github (2020d). Avatarify. Retrieved from: https://github.com/alievk/avatarify/blob/master/README.md#install

Gottfredson, M. R., & Hirschi, T. (1990). A General Theory of Crime. Stanford: Stanford University Press.

HSE (2020). Experience of onlineization of exams in foreign educational organizations and systems. Retrieved from: https://ioe.hse.ru/sao_exams?fbclid=IwAR34sr_byDe4av8LkGIS5ianKYPsv1C02LEw-kjCK5I9IC-tfXjTTLwAy0s

Kireev, M. (2019). Speech to speech. Create a neural network that falsifies the voice. Retrieved from: https://xakep.ru/2019/10/03/real-time-voice-cloning/

Kumar, A. (2019). Ethics in Generative AI: Detecting Fake Faces in Videos. Towardsdatascience. Retrieved from: https://towardsdatascience.com/ethics-in-generative-ai-detecting-fake-videos-93b69fcbabc7

Lobachevsky, N. I. (2020). The basics of error-correcting coding. Nizhny Novgorod State University named after N.I. Lobachevsky. Retrieved from: http://hpc-education.unn.ru/files/5-100-Materials/7.1.1_Courses/15/??????_??.05.09.pdf

Myownconference (2019). Advantages and Disadvantages of Video Conferencing. Retrieved from: https://myownconference.com/blog/en/index.php/advantages-disadvantages-video-conferencing/

Panasenko, A. (2020). Deepfake technologies as a threat to information security. Anti-malware.ru. Retrieved from: https://www.anti-malware.ru/analytics/Threats_Analysis/Deepfakes-as-a-information-security-threat

Pandasecurity (2019). Fraud with a deepfake: the dark side of artificial intelligence. Retrieved from: https://www.pandasecurity.com/mediacenter/news/deepfake-voice-fraud/

Pindrop (2018). Pindrop 2018 voice intelligence report. Retrieved from: https://www.pindrop.com/2018-voice-intelligence-report/

Proglib (2020). DeepFake Tutorial: Create Your Own Deepfake in DeepFaceLab. Retrieved from: https://proglib.io/p/deepfake-tutorial-sozdaem-sobstvennyy-dipfeyk-v-deepfacelab-2019-11-16

Purwins, H., Li, B., Virtanen, T., Schlüter, J., Chang, S. Y., & Sainath, T. (2019). Deep learning for audio signal processing. IEEE Journal of Selected Topics in Signal Processing, 13(2), 206-219.

Rojas Bahamón, M., Arbeláez-Campillo, D. F., & Prieto, J. D. (2019). The investigation as an environmental education strategy. Revista De La Universidad Del Zulia, 9(25), 89-97. Retrieved from: https://www.produccioncientificaluz.org/index.php/rluz/article/view/29743

Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019). Faceforensics++: Learning to detect manipulated facial images. Proceedings of the IEEE International Conference on Computer Vision, 1-11.

Simon, H. A. (1982). Models of bounded rationality. Volume 2: Behavioural economics and business organization. Cambridge: MIT Press.

Westerlund, M. (2019). The Emergence of Deepfake Technology: A Review. Technology Innovation Management Review, 9(11), 39-52.

Published

2021-05-31

How to Cite

Ukhov, P. A., Dmitrochenko, B. A., & Ryapukhin, A. V. (2021). The practice of technological deception in videoconferencing systems for distance learning and ways to counter it. Amazonia Investiga, 10(40), 153–168. https://doi.org/10.34069/AI/2021.40.04.16

Section

Articles