Збірники наукових праць ЦНТУ (Collected Scientific Papers of CNTU)
Permanent URI for this community: https://dspace.kntu.kr.ua/handle/123456789/1
Search Results (2 items)
Item: Інформаційна безпека життєдіяльності людини і суспільства в умовах війни (Information Security of Human Life and Society in Wartime) (ЦНТУ, 2023)
Authors: Марченко, К. М.; Оришака, О. В.; Marchenko, K.; Oryshaka, O.
Abstract: The article examines aspects of information warfare as an integral component of hot wars. It analyzes the mechanisms of informational influence on individual and mass consciousness and outlines the features of information warfare under the large-scale military operations in Ukraine. It is emphasized that information weapons produce consequences comparable to those of military weapons and constitute a kind of weapon of mass impact. Tools and means of countering informational aggression are proposed for individuals and for society.
Extended abstract: The large-scale war started in Ukraine has caused an intense surge of informational aggression and informational confrontation, the waves of which have spread almost all over the world. Artificially prepared information is used as a weapon that works no less effectively than army weapons. The primary target of information weapons is human consciousness and mass consciousness. Information security of human life and society under an intense information war becomes a priority task, as a guarantee of physical security. The information war, which under the large-scale military operations in Ukraine has become no less hot, exhibits new features and peculiarities: openness and frankness of informational influences; the global character of the information war; aggravation of informational clashes and disputes; intensity of information attacks; attempts to disable military and infrastructure facilities by informational means; aggressiveness of informational actions; informational violence; strict restrictions on access to information; politicization of information; significant polarization of information; and an increase in the share of emotional coloring and subjective interpretation relative to the share of facts.
Based on an overview of the scale of informational impact on society and its destructive consequences, it can be argued that information is a weapon of mass impact. The best ways to counter informational influences and aggression are: providing true and comprehensive information; educating the population through information-security courses; individual training, especially for responsible persons; training information-security trainers; broad promotion and development of information culture in the information society; media education, i.e. schools and information-literacy courses for the population; and teaching the rules of information hygiene, prudence, and discernment when dealing with information. In particular, citizens must be taught critical thinking and the recognition of negative informational influences, manipulation, misinformation, falsification, and the like.

Item: Ризики впровадження штучного інтелекту в комп'ютерні системи (Risks of Introducing Artificial Intelligence into Computer Systems) (ЦНТУ, 2022)
Authors: Марченко, К. М.; Оришака, О. В.; Марченко, А. К.; Мельник, А. М.; Marchenko, K.; Oryshaka, O.; Marchenko, A.; Melnick, A.
Abstract: The article considers automated information processing in computer systems and the requirements such systems must meet. It stresses that the absolute reliability of the algorithms and software of computer systems, and hence the full adequacy of the decisions obtained, cannot be guaranteed. Application areas of artificial intelligence are analyzed, and conclusions are drawn about the expediency and safety of its introduction in particular fields from the standpoint of human life safety and labor protection.
Extended abstract: Since the absolute reliability of computer systems, and of the results of the information processes running in them, cannot be guaranteed, the task of the research is to identify critical areas where such errors and failures are unacceptable.
The main problems with introducing artificial intelligence into computer systems are the inability to foresee all real situations and to program the machine's behavior adequately for them, insufficient reliability, and software errors. The input data on which artificial intelligence is trained may be incorrect. In addition, artificial intelligence systems are influenced by the way of thinking and the values of their developers, who are not always familiar with psychology, sociology, and other humanities. These shortcomings have led to many incidents during the use of artificial intelligence systems, including fatal ones. The analysis of a sample of artificial-intelligence error reports allowed the authors to determine in which areas errors are critical, i.e., where the use of artificial intelligence systems is associated with significant risk. In particular, these are areas such as medicine, military affairs, transport, manufacturing where people and robotic systems cooperate, hazardous industries, energy, social management, legal institutions, and others. Currently there is no regulatory and legal framework for the use of artificial intelligence, so its implementation is spontaneous, which leads to unpredictable results and accidents. Artificial intelligence used in critical infrastructure and in areas affecting human health and life belongs to the high-risk category. Based on this analysis, and because the absolute reliability of computer systems and their software cannot be ensured, the authors do not recommend using artificial intelligence in areas related to the safety, health, and lives of people, especially of large groups of people. Devices that use artificial-intelligence systems should be marked with notices of its use, with a clear warning about the limited reliability of the device in terms of safety and about the consumer's responsibility for using such a device.
The authors strongly discourage the use of artificial intelligence for responsible decision-making in areas related to the safety of large groups of people.