AI LAW MODEL FOR ETHICAL LEGISLATION: STRATEGIC RECOMMENDATIONS FOR THE REGULATION OF ARTIFICIAL INTELLIGENCE

Authors

Oleksii Kostenko
Ph.D. (Law) Senior Researcher, Associate Professor, State Scientific Institution, Institute of Information, Security and Law National Academy of Legal Sciences of Ukraine, Ukraine
https://orcid.org/0000-0002-2131-0281

Keywords:

Artificial Intelligence Law, Ethical Regulation, AI Governance, Interdisciplinary Approach, Legal Framework, Human-Centered AI, Transparency, Accountability, Non-Discrimination, Cyber Resilience, Risk Assessment, Public Trust, International Standards, Socio-Cultural Impact, Policy Recommendations, Preventive Regulation, Adaptive Regulation, AI Ethics, Technological Innovation, Legal Gaps

Synopsis

This study — the AI Law Model for Ethical Legislation: Strategic Recommendations for the Regulation of Artificial Intelligence — was conceived as a deliberate, purposeful scientific response to systemic flaws and hidden vulnerabilities identified in the existing and draft artificial intelligence regulations of the USA, the EU, the PRC, and other leading jurisdictions.
A comprehensive analysis of international practice shows that even the most ambitious and technologically advanced AI acts leave significant gaps: vague and contradictory definitions of basic concepts, fragmented and under-elaborated ethical requirements, and the absence or inadequacy of any deep assessment of the socio-psychological and cultural consequences of AI implementation. Such shortcomings and regulatory gaps can lead to serious legal conflicts, ethical crises, and heightened social tension, and can erode public trust in digital institutions and in public policy as a whole.
The purpose of the project "Model AI Law for Ethical Legislation: Strategic Recommendations for the Regulation of Artificial Intelligence" is not only to warn against the mechanical and uncritical copying of foreign regulatory models, but also to develop a rigorous approach adapted to the realities of national jurisdictions, grounded in deep interdisciplinary expertise and comprehensive risk analysis. This approach encompasses the legal, technical, ethical, social, cultural, and medical aspects of AI implementation, which makes it possible to treat LLM technology as a multidimensional phenomenon with long-term consequences. The key idea is that AI laws cannot be the product of narrowly specialized drafters who lack a systematic understanding of the technology, or of authors working exclusively within a single field (law, technology, medicine, etc.) without proper involvement of related disciplines. Only a broad coalition of interdisciplinary lawyers, highly qualified technical specialists, sociologists, psychologists, physicians, ethicists, and cybersecurity and risk-management specialists, working in integrated research groups, can realistically cover the entire range of direct and indirect risks and predict the long-term social, economic, and political consequences of the development and use of AI.
The structure of the study provides not only a formal statement of provisions but also sections that define and elaborate in detail the fundamental principles of human-centeredness, transparency, accountability, non-discrimination, and cyber resilience, taking into account international standards and scientific approaches. These sections are complemented by analytical blocks that systematically describe problem areas identified in other countries, give examples of the negative consequences of ignoring these principles, and formulate recommendations for avoiding such risks in Ukrainian legislation. In particular, the following critical aspects are emphasized:
• legal gaps and the absence of clearly defined enforcement mechanisms, which allow individual actors to avoid or minimize liability for the direct or indirect harmful consequences of the use of AI, including economic, ethical, and social losses;
• the lack of a single, scientifically grounded, and normatively fixed risk-assessment scale, which complicates the objective identification of the danger level of specific AI solutions and creates opportunities for deliberate or unintentional manipulation in the classification of systems, in particular to avoid regulatory requirements or reduce the scope of verification measures;
• the lack of effective, transparent, and legally enshrined mechanisms of public control and independent audit capable of ensuring systematic monitoring, objective assessment, and public reporting on the compliance of AI systems with ethical and legal standards;
• the neglect of a deep and comprehensive analysis of the psychological, emotional, and sociocultural impacts of AI on vulnerable populations, including children, the elderly, people with disabilities, and other socially sensitive groups, which can deepen inequality, foster social exclusion, and negatively transform cultural values;
• the weak integration and insufficient normative consolidation of internationally recognized technical standards and protocols in legal regulation, which complicates the unification of requirements, reduces the compatibility of national solutions with global ecosystems, and creates risks of technical and legal fragmentation.
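The call for a normatively fixed risk-assessment scale can be made concrete. As a purely illustrative sketch (the tier names, criteria, and mapping below are hypothetical and are not drawn from any cited act), a legislated scale would bind verifiable system properties to a single, deterministic tier, leaving no discretion for reclassification:

```python
from dataclasses import dataclass
from enum import IntEnum

class RiskTier(IntEnum):
    """An ordered, normatively fixed scale (hypothetical tier names)."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

@dataclass(frozen=True)
class AISystemProfile:
    # All criteria here are illustrative assumptions, not statutory text.
    manipulates_behavior: bool       # subliminal or exploitative techniques
    decides_on_rights: bool          # access to credit, employment, justice, etc.
    affects_vulnerable_groups: bool  # children, the elderly, people with disabilities
    human_oversight: bool            # a human can override every consequential output

def classify(profile: AISystemProfile) -> RiskTier:
    """Deterministic mapping: identical inputs always yield the same tier,
    which is what closes the classification-manipulation loophole."""
    if profile.manipulates_behavior:
        return RiskTier.UNACCEPTABLE
    if profile.decides_on_rights or profile.affects_vulnerable_groups:
        # Oversight mitigates, but never below LIMITED for rights-affecting systems.
        return RiskTier.LIMITED if profile.human_oversight else RiskTier.HIGH
    return RiskTier.MINIMAL
```

The design point is not these particular criteria but their form: an ordered scale plus a total, deterministic mapping from auditable properties to tiers, so that two assessors examining the same system cannot lawfully reach different classifications.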
Thus, this document is not merely a draft law in the traditional sense but a deeply considered analytical and regulatory warning, intended to serve as a conceptual guide for future state policy in the field of artificial intelligence. Its task is to build into AI legislation a multi-level mechanism that combines preventive, adaptive, and corrective regulatory tools able to respond flexibly to new technological challenges and ethical dilemmas. Implementing this approach should guarantee not only a formal balance between legal regulation, technological innovation, and public security, but also the stability of that balance amid rapid transformation and global competition, avoiding the strategic and tactical mistakes already made by world leaders in this area.

References

Stahl, B., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., Marchal, S., Rodrigues, R., Santiago, N., Warso, Z., & Wright, D. (2023). A systematic review of artificial intelligence impact assessments. Artificial Intelligence Review, 1 - 33. https://doi.org/10.1007/s10462-023-10420-8.

Moss, E., Watkins, E., Singh, R., Eilish, M., & Metcalfe, J. (2021). Building accountability: algorithmic assessment of impact in the public interest. Electronic journal SSRN . https://doi.org/10.2139/ssrn.3877437 .

Harris, S. (2020). Data Protection Impact Assessments as rule of law governance mechanisms. Data & Policy, 2. https://doi.org/10.1017/dap.2020.3.

Mantelero, A. (2018). AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment. Feminist Methodology & Research eJournal. https://doi.org/10.1016/J.CLSR.2018.05.017.

Calvi, A. (2024). Data Protection Impact Assessment under the EU General Data Protection Regulation: A feminist reflection. Comput. Law Secur. Rev., 53, 105950. https://doi.org/10.1016/j.clsr.2024.105950.

Mantelero, A. (2024). The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template. ArXiv, abs/2411.15149. https://doi.org/10.1016/j.clsr.2024.106020.

Rintamaki, T., & Pandit, H. (2024). Developing an Ontology for AI Act Fundamental Rights Impact Assessments. ArXiv, abs/2501.10391. https://doi.org/10.48550/arXiv.2501.10391.

Rintamaki, T., & Pandit, H. (2024). Towards An Automated AI Act FRIA Tool That Can Reuse GDPR's DPIA. ArXiv, abs/2501.14756. https://doi.org/10.48550/arXiv.2501.14756.

McGinn, R. (1990). Science, Technology and Society. .

Bhaskar, V., & Kumar, G. (2024). SCIENCE TECHNOLOGY AND MODERN SOCIETY. GLOBAL JOURNAL FOR RESEARCH ANALYSIS. https://doi.org/10.36106/gjra/7103658.

(2019). The Delphi Method. New Teaching Resources for Management in a Globalised World. https://doi.org/10.1142/9789811206542_0011.

Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique.. Journal of advanced nursing, 32 4, 1008-15 . https://doi.org/10.1046/J.1365-2648.2000.01567.X.

Zimmermann, H. (1980). OSI Reference Model - The ISO Model of Architecture for Open Systems Interconnection. IEEE Transactions on Communications, 28, 425-432. https://doi.org/10.1109/TCOM.1980.1094702.

Dromard, F. (1984). A guide to open systems interconnection. Computers and Standards, 3, 171-193. https://doi.org/10.1016/0167-8051(84)90006-8.

Abend, F. (2016). Open Systems Interconnection Handbook. .

Zhuk, A. (2024). Navigating the legal landscape of AI copyright: a comparative analysis of EU, US, and Chinese approaches. AI Ethics, 4, 1299-1306. https://doi.org/10.1007/s43681-023-00299-0.

Radanliev, P. (2025). Frontier AI regulation: what form should it take?. Frontiers in Political Science. https://doi.org/10.3389/fpos.2025.1561776.

Widder, D., & Nafus, D. (2022). Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility. Big Data & Society, 10. https://doi.org/10.1177/20539517231177620.

Cobbe, J., Wil, M., & Singh, J. (2023). Understanding Accountability in Algorithmic Supply Chains. Proceedings of the 2023 ACM Conference on Equity, Accountability and Transparency . https://doi.org/10.1145/3593013.3594073 .

Imagawa, K., Mizukami, Y., & Miyazaki, S. (2018). Regulatory convergence of medical devices: a case study using ISO and IEC standards. Expert Review of Medical Devices, 15, 497 - 504. https://doi.org/10.1080/17434440.2018.1492376.

Sheffner, D. (2019). Integrating Technical Standards into Federal Regulations: Incorporation by Reference. The Cambridge Handbook of Technical Standardization Law. https://doi.org/10.1017/9781316416785.007.

Rodríguez, N., Ser, J., Coeckelbergh, M., De Prado, M., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation. Inf. Fusion, 99, 101896. https://doi.org/10.48550/arXiv.2305.02231.

Truby, J., Brown, R., Ibrahim, I., & Parellada, O. (2021). A Sandbox Approach to Regulating High-Risk Artificial Intelligence Applications. European Journal of Risk Regulation, 13, 270 - 294. https://doi.org/10.1017/err.2021.52.

Kostenko, O., & Golovko, O. (2023). Metaverse electronic jurisdiction: challenges and risks of legal regulation of virtual reality. INFORMATION AND LAW. https://doi.org/10.37750/2616-6798.2023.1(44).287729.

O., Kostenko. (2022). ELECTRONIC JURISDICTION, METAVERSE, ARTIFICIAL INTELLIGENCE, DIGITAL PERSONALITY, DIGITAL AVATAR, NEURAL NETWORKS: THEORY, PRACTICE, PERSPECTIVE. World Science. https://doi.org/10.31435/rsglobal_ws/30012022/7751.

Mendes, P. (2023). Model‐based risk analysis for system design. Systems Engineering, 27, 20 - 5. https://doi.org/10.1002/sys.21704.

Janssen, H., Lee, M., & Singh, J. (2022). Practical fundamental rights impact assessments. Int. J. Law Inf. Technol., 30, 200-232. https://doi.org/10.1093/ijlit/eaac018.

Regulations on the Management of Algorithmic Recommendations in Internet Information Services https://www.cac.gov.cn/2022-01/04/c_1642894606364259.htm

Personal Information Protection Law of the People’s Republic of China https://digichina.stanford.edu/work/translation-personal-information-protection-law-of-the-peoples-republic-of-china-effective-nov-1-2021/

Outeda, C. (2024). The EU's AI act: A framework for collaborative governance. Internet Things, 27, 101291. https://doi.org/10.1016/j.iot.2024.101291.

Pehlivan, C. (2024). The EU Artificial Intelligence (AI) Act: An Introduction. Global Privacy Law Review. https://doi.org/10.54648/gplr2024004.

Minh, L. (2024). Eu Ai Act and Its Relationship with Vietnamese Lawin Creating a Legal Policy for Ai Regulation. International Journal of Religion. https://doi.org/10.61707/jp3pks38.

Hoffmeister, K. (2024). The Dawn of Regulated AI: Analyzing the European AI Act and its Global Impact. Zeitschrift für europarechtliche Studien. https://doi.org/10.5771/1435-439x-2024-2-182.

Gilbert, S. (2024). The EU passes the AI Act and its implications for digital medicine are unclear. NPJ Digital Medicine, 7. https://doi.org/10.1038/s41746-024-01116-6.

Lewis, D., Lasek-Markey, M., Golpayegani, D., & Pandit, H. (2025). Mapping the Regulatory Learning Space for the EU AI Act. ArXiv, abs/2503.05787. https://doi.org/10.48550/arXiv.2503.05787.

Ukoh, D., & Adetunji, M. (2025). AI Act: The EU Regulation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4607388.

Minh, L. (2024). EU Artificial Intelligence Law and Its Relationship with Vietnamese Legislation in the Field of Creating Legal Policies for the Regulation of Artificial Intelligence. International Journal of Religion . https://doi.org/10.61707/jp3pks38 .

Birchfield, V. (2024). From roadmap to regulation: will there be a transatlantic approach to governing artificial intelligence?. Journal of European Integration, 46, 1053 - 1071. https://doi.org/10.1080/07036337.2024.2407571.

Kuzior, A. (2024). Navigating AI Regulation: A Comparative Analysis of EU and US Legal Frameworks. Materials Research Proceedings. https://doi.org/10.21741/9781644903315-30.

Khassanay, A., & Tifine, P. (2025). THROUGH THE LENS OF THE LAW: HOW CHINA AND THE EUROPEAN UNION ARE SHAPING THE FUTURE OF ARTIFICIAL INTELLIGENCE. Bulletin of KazNPU named after Abai series "Jurisprudence". https://doi.org/10.51889/2959-6181.2024.78.4.005.

Bal, R., & Gill, I. (2020). Policy Approaches to Artificial Intelligence Based Technologies in China, European Union and the United States. . https://doi.org/10.2139/ssrn.3699640.

Arshad, N., Butt, T., & Iqbal, M. (2025). A Comprehensive Framework for Intelligent, Scalable, and Performance-Optimized Software Development. IEEE Access, 13, 74062-74077. https://doi.org/10.1109/ACCESS.2025.3564139.

Kulkarni, N. (2024). Role of AI in Application Life Cycle Management (ALM). Journal of Artificial Intelligence & Cloud Computing. https://doi.org/10.47363/jaicc/2024(3)397.

Laato, S., Birkstedt, T., Mäntymäki, M., Minkkinen, M., & Mikkonen, T. (2022). AI Governance in the System Development Life Cycle: Insights on Responsible Machine Learning Engineering. 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), 113-123. https://doi.org/10.1145/3522664.3528598.

De Silva, D., & Alahakoon, D. (2021). An artificial intelligence life cycle: From conception to production. Patterns, 3. https://doi.org/10.1016/j.patter.2022.100489.

Shahriar, S., Allana, S., Hazratifard, S., & Dara, R. (2023). A Survey of Privacy Risks and Mitigation Strategies in the Artificial Intelligence Life Cycle. IEEE Access, 11, 61829-61854. https://doi.org/10.1109/ACCESS.2023.3287195.

Harvey, B., & Gowda, V. (2020). How the FDA Regulates AI.. Academic radiology, 27 1, 58-61 . https://doi.org/10.1016/j.acra.2019.09.017.

Zhao, S., Blaabjerg, F., & Wang, H. (2020). An Overview of Artificial Intelligence Applications for Power Electronics. IEEE Transactions on Power Electronics, 36, 4633-4658. https://doi.org/10.1109/TPEL.2020.3024914.

Amariles, D., & Baquero, P. (2023). Promises and limits of law for a human-centric artificial intelligence. Comput. Law Secur. Rev., 48, 105795. https://doi.org/10.1016/j.clsr.2023.105795.

Azzutti, A., Ringe, W., & Stiehl, H. (2022). The Regulation of AI Trading from an AI Life Cycle Perspective. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4260423.

Bassey, K., Juliet, A., & Stephen, A. (2024). AI-Enhanced lifecycle assessment of renewable energy systems. Engineering Science & Technology Journal. https://doi.org/10.51594/estj.v5i7.1254.

Rangone, N. (2023). Artificial intelligence challenging core State functions. Revista de Derecho Público: Teoría y método. https://doi.org/10.37417/rdp/vol_8_2023_1949.

Brey, P., & Dainow, B. (2023). Ethics by design for artificial intelligence. AI Ethics, 4, 1265-1277. https://doi.org/10.1007/s43681-023-00330-4.

d’Aquin, M., Troullinou, P., O'Connor, N., Cullen, A., Faller, G., & Holden, L. (2018). Towards an "Ethics by Design" Methodology for AI Research Projects. Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3278721.3278765.

Iphofen, R., & Kritikos, M. (2019). Regulating artificial intelligence and robotics: ethics by design in a digital society. Contemporary Social Science, 16, 170 - 184. https://doi.org/10.1080/21582041.2018.1563803.

Gerdes, A. (2021). A participatory data-centric approach to AI Ethics by Design. Applied Artificial Intelligence, 36. https://doi.org/10.1080/08839514.2021.2009222.

Coeckelbergh, M. (2019). Artificial Intelligence: Some ethical issues and regulatory challenges. , 2019, 31-34. https://doi.org/10.26116/TECHREG.2019.003.

Perperidis, G. (2024). Designing Ethical A.I. Under the Current Socio-Economic Milieu: Philosophical, Political and Economic Challenges of Ethics by Design for A.I.. Philosophy & Technology. https://doi.org/10.1007/s13347-024-00766-4.

AI Risk Management Framework http://bit.ly/3LjMnvO.

Artificial Intelligence Act (AI ACT) https://oecd.ai/en/dashboards/policy-initiatives/artificial-intelligence-act-ai-act-9517

Recommendation on the Ethics of Artificial Intelligence https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

ISO/IEC 22989:2022 - Information technology - Artificial intelligence - Artificial intelligence concepts and terminology https://oecd.ai/en/catalogue/tools/isoiec-229892022-information-technology-artificial-intelligence-artificial-intelligence-concepts-and-terminology

Craig, C. (2021). The Relational Robot: A Normative Lens for AI Legal Neutrality (Reviewing Ryan Abbott, The Reasonable Robot, Cambridge University Press, 2020). SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4118849.

Craig, C. (2021). The AI-Copyright Challenge: Tech-Neutrality, Authorship, and the Public Interest. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4014811.

Thiebes, S., Lins, S., & Sunyaev, A. (2020). Trustworthy artificial intelligence. Electronic Markets, 31, 447 - 464. https://doi.org/10.1007/s12525-020-00441-4.

Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for Artificial Intelligence and Digital technologies. Int. J. Inf. Manag., 62, 102433. https://doi.org/10.1016/j.ijinfomgt.2021.102433.

O’Connor, S., & Liu, H. (2023). Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities. AI & SOCIETY, 1-13. https://doi.org/10.1007/s00146-023-01675-4.

Barmer, H., Zombak, R., Gaston, M., Palat, W., Redner, F., & Smith, K. (2021). Human-centric AI. IEEE Pervasive Comput. , 22, 7-8. https://doi.org/10.1184/R1/16560183.V1 .

Bingley, W., Haslam, S., Steffens, N., Gillespie, N., Worthy, P., Curtis, C., Lockey, S., Bialkowski, A., Ko, R., & Wiles, J. (2023). Enlarging the model of the human at the heart of human-centered AI: A social self-determination model of AI system impact. New Ideas in Psychology. https://doi.org/10.1016/j.newideapsych.2023.101025.

Wang, D., Maes, P., Ren, X., Shneiderman, B., Shi, Y., & Wang, Q. (2021). Designing AI to Work WITH or FOR People?. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3411763.3450394.

Bingley, W., Curtis, C., Lockey, S., Bialkowski, A., Gillespie, N., Haslam, S., Ko, R., Steffens, N., Wiles, J., & Worthy, P. (2022). Where is the human in human-centered AI? Insights from developer priorities and user experiences. Comput. Hum. Behav., 141, 107617. https://doi.org/10.1016/j.chb.2022.107617.

Rong, Y., Leemann, T., Nguyen, T., Fiedler, L., Qian, P., Unhelkar, V., Seidel, T., Kasneci, G., & Kasneci, E. (2022). Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46, 2104-2122. https://doi.org/10.1109/TPAMI.2023.3331846.

Zhang, X., Chan, F., Yan, C., & Bose, I. (2022). Towards risk-aware artificial intelligence and machine learning systems: An overview. Decis. Support Syst., 159, 113800. https://doi.org/10.1016/j.dss.2022.113800.

Hu, Y., Kuang, W., Qin, Z., Li, K., Zhang, J., Gao, Y., & Li, W. (2021). Artificial Intelligence Security: Threats and Countermeasures. ACM Computing Surveys (CSUR), 55, 1 - 36. https://doi.org/10.1145/3487890.

King, T., Aggarwal, N., Taddeo, M., & Floridi, L. (2019). Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Science and Engineering Ethics, 26, 89 - 120. https://doi.org/10.1007/s11948-018-00081-0.

Roberts, H. (2024). Digital sovereignty and artificial intelligence: a normative approach. Ethics Inf. Technol., 26, 70. https://doi.org/10.1007/s10676-024-09810-5.

Calderaro, A., & Blumfelde, S. (2022). Artificial intelligence and EU security: the false promise of digital sovereignty. European Security, 31, 415 - 434. https://doi.org/10.1080/09662839.2022.2101885.

Kolianov, A. (2022). Artificial Intelligence as a Strategic Component of Technological Sovereignty. Discourse. https://doi.org/10.32603/2412-8562-2022-8-5-81-90.

Dokumacı, M. (2024). Legal Frameworks for AI Regulations. Human Computer Interaction. https://doi.org/10.62802/ytst2927.

Eliot, L. (2021). Interoperability Of AI-to-AI Multi-Lawyering. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3990307.

Ren, Q., & Du, J. (2024). Harmonizing innovation and regulation: The EU Artificial Intelligence Act in the international trade context. Comput. Law Secur. Rev., 54, 106028. https://doi.org/10.1016/j.clsr.2024.106028.

Kurmangali, M. (2024). Navigating the Frontiers of Digital Diplomacy: Multilateral Cooperation on Artificial Intelligence Regulation at UN and EU Levels. Journal "International Relations and International Law" . https://doi.org/10.26577/irilj.2024.v105.i1.09 .

Radanliev, P. (2024). Cyber diplomacy: defining the opportunities for cybersecurity and risks from Artificial Intelligence, IoT, Blockchains, and Quantum Computing. Journal of Cyber Security Technology, 9, 28 - 78. https://doi.org/10.1080/23742917.2024.2312671.

Bubashait, F. (2025). The emerging role of AI technologies in supporting digital diplomacy and shaping international relations. International Journal for Scientific Research. https://doi.org/10.59992/ijsr.2025.v4n2p2.

Stoltz, M. (2024). Artificial Intelligence in Cybersecurity: Building Resilient Cyber Diplomacy Frameworks. ArXiv, abs/2411.13585. https://doi.org/10.48550/arXiv.2411.13585.

De Almeida, P., Santos, C., & Farias, J. (2021). Regulating Artificial Intelligence: A Framework for Governance. Ethics and Information Technology , 23, 505 - 525. https://doi.org/10.1007/s10676-021-09593-z .

Chauhan, S., Sharma, N., & Kumar, R. (2025). Comparative Analysis of Artificial Intelligence and Diplomacy: Transforming Democratic Governance. Journal of Informatics Education and Research. https://doi.org/10.52783/jier.v5i1.2293.

Neuwirth, R. (2023). Prohibited artificial intelligence practices in the proposed EU artificial intelligence act (AIA). Comput. Law Secur. Rev., 48, 105798. https://doi.org/10.1016/j.clsr.2023.105798.

Neuwirth, R. (2022). Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4261569.

Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2021). Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds and Machines, 32, 241 - 268. https://doi.org/10.1007/s11023-021-09577-4.

Stettinger, G., Weissensteiner, P., & Khastgir, S. (2024). Trustworthiness Assurance Assessment for High-Risk AI-Based Systems. IEEE Access, 12, 22718-22745. https://doi.org/10.1109/ACCESS.2024.3364387.

Hupont, I., Micheli, M., Delipetrev, B., Gómez, E., & Garrido, J. (2023). Documenting High-Risk AI: A European Regulatory Perspective. Computer, 56, 18-27. https://doi.org/10.1109/MC.2023.3235712.

Manchev, A. (2024). WORLD’S FIRST LAW FOR ARTIFICIAL INTELLIGENCE. LEGAL, ETHICAL AND ECONOMIC ASPECTS. Education and Technologies Journal. https://doi.org/10.26883/2010.241.5985.

Mezgár, I., & Váncza, J. (2022). From ethics to standards - A path via responsible AI to cyber-physical production systems. Annu. Rev. Control., 53, 391-404. https://doi.org/10.1016/j.arcontrol.2022.04.002.

Hoseini, F. (2023). AI Ethics: A Call for Global Standards in Technology Development. AI and Tech in Behavioral and Social Sciences. https://doi.org/10.61838/kman.aitech.1.4.1.

Jedličková, A. (2024). Ensuring Ethical Standards in the Development of Autonomous and Intelligent Systems. IEEE Transactions on Artificial Intelligence, 5, 5863-5872. https://doi.org/10.1109/TAI.2024.3387403.

Sengar, S., Hasan, A., Kumar, S., & Carroll, F. (2024). Generative Artificial Intelligence: A Systematic Review and Applications. ArXiv, abs/2405.11029. https://doi.org/10.48550/arXiv.2405.11029.

Banh, L., & Strobel, G. (2023). Generative artificial intelligence. Electronic Markets, 33, 1-17. https://doi.org/10.1007/s12525-023-00680-1.

Batchu, C., & Satya, V. (2024). Generative AI: Evolution and its Future. International Journal For Multidisciplinary Research. https://doi.org/10.36948/ijfmr.2024.v06i01.12046.

Zhang, P., & Boulos, M. (2023). Generative AI in Medicine and Healthcare: Promises, Opportunities and Challenges. Future Internet, 15, 286. https://doi.org/10.3390/fi15090286.

Sedkaoui, S., & Benaichouba, R. (2024). Generative AI as a transformative force for innovation: a review of opportunities, applications and challenges. European Journal of Innovation Management. https://doi.org/10.1108/ejim-02-2024-0129.

Riemer, K., & Peter, S. (2024). Conceptualizing generative AI as style engines: Application archetypes and implications. Int. J. Inf. Manag., 79, 102824. https://doi.org/10.1016/j.ijinfomgt.2024.102824.

Koohi-Moghadam, M., & Bae, K. (2023). Generative AI in Medical Imaging: Applications, Challenges, and Ethics. Journal of Medical Systems, 47, 1-4. https://doi.org/10.1007/s10916-023-01987-4.

Gozalo-Brizuela, R., & Garrido-Merch'an, E. (2023). A survey of Generative AI Applications. ArXiv, abs/2306.02781. https://doi.org/10.48550/arXiv.2306.02781.

M., Sharma, P., & Bhardwaj, A. (2025). Exploring the Capabilities and Limitations of Generative AI Applications, Challenges, and Future Directions. 2025 International Conference on Pervasive Computational Technologies (ICPCT), 24-29. https://doi.org/10.1109/ICPCT64145.2025.10940335.

Saeed, W., & Omlin, C. (2021). Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities. ArXiv, abs/2111.06420. https://doi.org/10.1016/j.knosys.2023.110273.

Haque, A., Islam, A., & Mikalef, P. (2022). Explainable Artificial Intelligence (XAI) from a user perspective- A synthesis of prior literature and problematizing avenues for future research. ArXiv, abs/2211.15343. https://doi.org/10.1016/j.techfore.2022.122120.

Vilone, G., & Longo, L. (2021). Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion, 76, 89-106. https://doi.org/10.1016/J.INFFUS.2021.05.009.

Nauta, M., Trienes, J., Pathak, S., Nguyen, E., Peters, M., Schmitt, Y., Schlötterer, J., Keulen, M., & Seifert, C. (2022). From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Computing Surveys, 55, 1 - 42. https://doi.org/10.1145/3583558.

Kale, A., Nguyen, T., Harris, F., Li, C., Zhang, J., & , X. (2022). Provenance documentation to enable explainable and trustworthy AI: A literature review. Data Intelligence, 5, 139-162. https://doi.org/10.1162/dint_a_00119.

Laato, S., Tiainen, M., Islam, N., & Mäntymäki, M. (2022). How to explain AI systems to end users: a systematic literature review and research agenda. Internet Res., 32, 1-31. https://doi.org/10.1108/intr-08-2021-0600.

Nunes, I., & Jannach, D. (2017). A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction, 27, 393-444. https://doi.org/10.1007/s11257-017-9195-0.

Kale, A., Nguyen, T., Harris, F., Li, C., Zhang, J., &, X. (2022). Documentation of origin to enable explainable and reliable artificial intelligence: a literature review. Data Intelligence , 5, 139-162. https://doi.org/10.1162/dint_a_00119.

Ghasemi, A., Hashtarkhani, S., Schwartz, D., & Shaban-Nejad, A. (2024). Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review. Cancer Innovation, 3. https://doi.org/10.1002/cai2.136.

Chou, Y., Moreira, C., Bruza, P., Ouyang, C., & Jorge, J. (2021). Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications. Inf. Fusion, 81, 59-83. https://doi.org/10.1016/j.inffus.2021.11.003.

Ali, S., Akhlaq, F., Imran, A., Kastrati, Z., Daudpota, S., & Moosa, M. (2023). The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Computers in biology and medicine, 166, 107555 . https://doi.org/10.1016/j.compbiomed.2023.107555.

Laux, J., Wachter, S., & Mittelstadt, B. (2023). Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance, 18, 3 - 32. https://doi.org/10.1111/rego.12512.

Raimundo, R., & Rosário, A. (2021). The Impact of Artificial Intelligence on Data System Security: A Literature Review. Sensors (Basel, Switzerland), 21. https://doi.org/10.3390/s21217029.

Al-Kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical Challenges and Solutions of Generative AI: An Interdisciplinary Perspective. Informatics, 11, 58. https://doi.org/10.3390/informatics11030058.

Gao, Y., Wang, Y., Wang, R., Wang, X., Sun, Y., Ding, Y., Xu, H., Chen, Y., Zhao, Y., Huang, H., Li, Y., Zhang, J., Zheng, X., Bai, Y., Ding, H., Wu, Z., Qiu, X., Zhang, J., Li, Y., Sun, J., Wang, C., Gu, J., Wu, B., Chen, S., Zhang, T., Liu, Y., Gong, M., Liu, T., Pan, S., Xie, C., Pang, T., Dong, Y., Jia, R., Zhang, Y., , S., Zhang, X., Gong, N., Xiao, C., Erfani, S., Li, B., Sugiyama, M., Tao, D., Bailey, J., & Jiang, Y. (2025). Safety at Scale: A Comprehensive Survey of Large Model Safety. ArXiv, abs/2502.05206. https://doi.org/10.48550/arXiv.2502.05206.

Wäschle, M., Thaler, F., Berres, A., Pölzlbauer, F., & Albers, A. (2022). A review on AI Safety in highly automated driving. Frontiers in Artificial Intelligence, 5. https://doi.org/10.3389/frai.2022.952773.

De Micco, F., Di Palma, G., Ferorelli, D., De Benedictis, A., Tomassini, L., Tambone, V., Cingolani, M., & Scendoni, R. (2025). Artificial intelligence in healthcare: transforming patient safety with intelligent systems—A systematic review. Frontiers in Medicine, 11. https://doi.org/10.3389/fmed.2024.1522554.

Kuznietsov, A., Gyevnar, B., Wang, C., Peters, S., & Albrecht, S. (2024). Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review. IEEE Transactions on Intelligent Transportation Systems, 25, 19342-19364. https://doi.org/10.1109/TITS.2024.3474469.

Salhab, W., Ameyed, D., Jaafar, F., & Mcheick, H. (2024). A Systematic Literature Review on AI Safety: Identifying Trends, Challenges, and Future Directions. IEEE Access, 12, 131762-131784. https://doi.org/10.1109/ACCESS.2024.3440647.

Maurya, A., & Kumar, D. (2020). Reliability of safety‐critical systems: A state‐of‐the‐art review. Quality and Reliability Engineering International, 36, 2547 - 2568. https://doi.org/10.1002/qre.2715.

Wang, Y., & Chung, S. (2021). Artificial intelligence in safety-critical systems: a systematic review. Ind. Manag. Data Syst., 122, 442-470. https://doi.org/10.1108/imds-07-2021-0419.

Neto, A., Camargo, J., Almeida, J., & Cugnasca, P. (2022). Safety Assurance of Artificial Intelligence-Based Systems: A Systematic Literature Review on the State of the Art and Guidelines for Future Work. IEEE Access, 10, 130733-130770. https://doi.org/10.1109/ACCESS.2022.3229233.

Martínez-Fernández, S., Bogner, J., Franch, X., Oriol, M., Siebert, J., Trendowicz, A., Vollmer, A., & Wagner, S. (2021). Software Engineering for AI-Based Systems: A Survey. ACM Transactions on Software Engineering and Methodology (TOSEM), 31, 1-59. https://doi.org/10.1145/3487043.

Ismatullaev, U., & Kim, S. (2022). Review of the Factors Affecting Acceptance of AI-Infused Systems. Human Factors, 66, 126 - 144. https://doi.org/10.1177/00187208211064707.

Marjanovic, O., Cecez-Kecmanovic, D., & Vidgen, R. (2021). Theorising Algorithmic Justice. European Journal of Information Systems, 31, 269 - 287. https://doi.org/10.1080/0960085X.2021.1934130.

Pfeiffer, J., Gutschow, J., Haas, C., Möslein, F., Maspfuhl, O., Borgers, F., & Alpsancar, S. (2023). Algorithmic Fairness in AI. Business & Information Systems Engineering, 65, 209-222. https://doi.org/10.1007/s12599-023-00787-x.

Gabriel, I. (2021). Toward a Theory of Justice for Artificial Intelligence. Daedalus, 151, 218-231. https://doi.org/10.1162/daed_a_01911.

Halsband, A. (2022). Sustainable AI and Intergenerational Justice. Sustainability. https://doi.org/10.3390/su14073922.

Bellamy, R., Mojsilovic, A., Nagar, S., Ramamurthy, K., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K., Zhang, Y., Dey, K., Hind, M., Hoffman, S., Houde, S., Kannan, K., Lohia, P., Martino, J., & Mehta, S. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development. https://doi.org/10.1147/jrd.2019.2942287.

Leben, D. (2025). AI Fairness. MIT Press. https://doi.org/10.7551/mitpress/15740.001.0001.

Acikgoz, Y., Davison, K., Compagnone, M., & Laske, M. (2020). Justice Perceptions of Artificial Intelligence in Selection. International Journal of Selection and Assessment. https://doi.org/10.1111/ijsa.12306.

Yalcin, G., Themeli, E., Stamhuis, E., Philipsen, S., & Puntoni, S. (2022). Perceptions of Justice By Algorithms. Artificial Intelligence and Law, 31, 269 - 292. https://doi.org/10.1007/s10506-022-09312-z.

Graham, S., & Hopkins, H. (2021). AI for Social Justice: New Methodological Horizons in Technical Communication. Technical Communication Quarterly, 31, 89 - 102. https://doi.org/10.1080/10572252.2021.1955151.

Noorman, M., Apráez, B., & Lavrijssen, S. (2023). AI and Energy Justice. Energies. https://doi.org/10.3390/en16052110.

Polo, E., & Ailodion, D. (2025). Tackling Racial Bias in AI Systems: Applying the Bioethical Principle of Justice and Insights from Joy Buolamwini’s “Coded Bias” and the “Algorithmic Justice League”. Bangladesh Journal of Bioethics. https://doi.org/10.62865/bjbio.v16i1.129.

Zhang, X., Antwi-Afari, M., Zhang, Y., & Xing, X. (2024). The Impact of Artificial Intelligence on Organizational Justice and Project Performance: A Systematic Literature and Science Mapping Review. Buildings. https://doi.org/10.3390/buildings14010259.

Buccella, A. (2022). "AI for all" is a matter of social justice. AI and Ethics, 1-10. https://doi.org/10.1007/s43681-022-00222-z.

Chen, J., Yan, H., Liu, Z., Zhang, M., Xiong, H., & Yu, S. (2024). When Federated Learning Meets Privacy-Preserving Computation. ACM Computing Surveys, 56, 1 - 36. https://doi.org/10.1145/3679013.

Guo, J., Pietzuch, P., Paverd, A., & Vaswani, K. (2024). Trustworthy AI using Confidential Federated Learning. Queue, 22, 87 - 107. https://doi.org/10.1145/3665220.

Kakarala, M., & Rongali, S. (2025). Data Privacy and Security in AI. World Journal of Advanced Research and Reviews. https://doi.org/10.30574/wjarr.2025.25.3.0555.

Yu, S., Carroll, F., & Bentley, B. (2024). Insights Into Privacy Protection Research in AI. IEEE Access, 12, 41704-41726. https://doi.org/10.1109/ACCESS.2024.3378126.

Lee, D., Antonio, J., & Khan, H. (2024). Privacy-Preserving Decentralized AI with Confidential Computing. ArXiv, abs/2410.13752. https://doi.org/10.48550/arXiv.2410.13752.

Guntupalli, N. (2023). Artificial Intelligence as a Service: Providing Integrity and Confidentiality, 309-315. https://doi.org/10.1007/978-3-031-36402-0_28.

Vaswani, K., Volos, S., Fournet, C., Diaz, A., Gordon, K., Vembu, B., Webster, S., Chisnall, D., Kulkarni, S., Cunningham, G., Osborne, R., & Wilkinson, D. (2023). Confidential Computing within an AI Accelerator, 501-518.

Searle, R., & Gururaj, P. (2022). Establishing security and trust for object detection and classification with confidential AI. Proceedings of SPIE, 12113, 121130C. https://doi.org/10.1117/12.2618303.

Kostenko, O. V. (2021). Management of Identification Data: Legal Regulation of Anonymization and Pseudonymization. Scientific Bulletin of Public and Private Law, 1, 76-81. https://doi.org/10.32844/2618-1258.2021.1.13.

Kostenko, O. V. (2021). Legal Regulation of Identity Data Management: UNCITRAL, Cross-Border Trust Space. Law. State. Technology, 4, 56-60. https://doi.org/10.32782/LST/2021-4-10.

Kostenko, O. (2021). Identification Data Management: Legal Regulation and Classification. Scientific Journal of Polonia University, 43(6), 198-203. https://doi.org/10.23856/4325.

Kostenko, O. V. (2021). Management of Identification Data: Legal Regulation and Classification. Young Scientist, 3(91), 90-94. https://doi.org/10.32839/2304-5809/2021-3-91-21.

Buijsman, S. (2024). Transparency for AI systems: a value-based approach. Ethics Inf. Technol., 26, 34. https://doi.org/10.1007/s10676-024-09770-w.

Larsson, S., & Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Rev., 9. https://doi.org/10.14763/2020.2.1469.

Ehsan, U., Liao, Q., Muller, M., Riedl, M., & Weisz, J. (2021). Expanding Explainability: Towards Social Transparency in AI systems. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3411764.3445188.

Hunter, A. (2023). Interactions with AI Systems: Trust and Transparency. 2023 IEEE Engineering Informatics, 1-6. https://doi.org/10.1109/IEEECONF58110.2023.10520626.

Balasubramaniam, N., Kauppinen, M., Rannisto, A., Hiekkanen, K., & Kujala, S. (2023). Transparency and explainability of AI systems: From ethical guidelines to requirements. Inf. Softw. Technol., 159, 107197. https://doi.org/10.1016/j.infsof.2023.107197.

Cheong, B. (2024). Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics. https://doi.org/10.3389/fhumd.2024.1421273.

Birhane, A., Steed, R., Ojewale, V., Vecchione, B., & Raji, I. (2024). AI auditing: The Broken Bus on the Road to AI Accountability. 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 612-643. https://doi.org/10.1109/SaTML59370.2024.00037.

Murikah, W., Nthenge, J., & Musyoka, F. (2024). Bias and Ethics of AI Systems Applied in Auditing - A Systematic Review. Scientific African. https://doi.org/10.1016/j.sciaf.2024.e02281.

Novelli, C., Taddeo, M., & Floridi, L. (2023). Accountability in artificial intelligence: what it is and how it works. AI & SOCIETY, 1-12. https://doi.org/10.2139/ssrn.4180366.

Tóth, Z., Caruana, R., Gruber, T., & Loebbecke, C. (2022). The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability. Journal of Business Ethics, 178, 895 - 916. https://doi.org/10.1007/s10551-022-05050-z.

Schmidt, J., Bartsch, S., Adam, M., & Benlian, A. (2025). Elevating Developers' Accountability Awareness in AI Systems Development. Bus. Inf. Syst. Eng., 67, 109-135. https://doi.org/10.1007/s12599-024-00914-2.

Miguel, B., Naseer, A., & Inakoshi, H. (2020). Putting Accountability of AI Systems into Practice. Proceedings of IJCAI 2020, 5276-5278. https://doi.org/10.24963/ijcai.2020/768.

OECD (2023). Advancing accountability in AI. OECD Digital Economy Papers. https://doi.org/10.1787/2448f04b-en.

Kim, B., & Doshi-Velez, F. (2021). Machine Learning Techniques for Accountability. AI Mag., 42, 47-52. https://doi.org/10.1002/j.2371-9621.2021.tb00010.x.

Som, C., Hilty, L., & Köhler, A. (2009). The Precautionary Principle as a Framework for a Sustainable Information Society. Journal of Business Ethics, 85, 493-505. https://doi.org/10.1007/S10551-009-0214-X.

Botes, M. (2023). Regulating scientific and technological uncertainty: The precautionary principle in the context of human genomics and AI. South African journal of science, 119. https://doi.org/10.17159/sajs.2023/15037.

Druzin, B., Boute, A., & Ramsden, M. (2025). Confronting Catastrophic Risk: The International Obligation to Regulate Artificial Intelligence. ArXiv, abs/2503.18983. https://doi.org/10.36642/mjil.46.2.confronting.

Bengio, Y., Cohen, M., Fornasiere, D., Ghosn, J., Greiner, P., MacDermott, M., Mindermann, S., Oberman, A., Richardson, J., Richardson, O., Rondeau, M., St-Charles, P., & Williams-King, D. (2025). Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?. ArXiv, abs/2502.15657. https://doi.org/10.48550/arXiv.2502.15657.

Maior, D. (2025). THE NORMATIVE SIGNIFICANCE OF THE PRECAUTIONARY PRINCIPLE IN ARTIFICIAL INTELLIGENCE PROBLEM. Curentul Juridic/Juridical Current. https://doi.org/10.62838/cjjc-2024-0040.

Hansson, S. (2020). How Extreme Is the Precautionary Principle?. NanoEthics, 14, 245 - 257. https://doi.org/10.1007/s11569-020-00373-5.

Kaivanto, K. (2025). The Precautionary Principle and the Innovation Principle: Incompatible Guides for AI Innovation Governance?

Miller, H., & Engemann, C. (2019). The precautionary principle and unforeseen consequences. Kybernetes, 48, 265-286. https://doi.org/10.1108/K-01-2018-0050.

Defur, P., & Kaszuba, M. (2002). Implementing the precautionary principle. The Science of the Total Environment, 288(1-2), 155-165. https://doi.org/10.1016/S0048-9697(01)01107-X.

Yin, M., & Zou, K. (2021). The Implementation of the Precautionary Principle in Nuclear Safety Regulation: Challenges and Prospects. Sustainability. https://doi.org/10.3390/su132414033.

Foster, K., Vecchia, P., & Repacholi, M. (2000). Science and the Precautionary Principle. Science, 288, 979 - 981. https://doi.org/10.1126/SCIENCE.288.5468.979.

(2023). Precaução e inovação: uma análise da regulação de riscos no uso da inteligência artificial [Precaution and innovation: an analysis of risk regulation in the use of artificial intelligence]. Revista de Direito Empresarial. https://doi.org/10.52028/rdemp.v20i1_art08.

Fernandes, R., & Oliveira, L. (2021). A REGULAÇÃO DO AGIR DECISÓRIO DISRUPTIVO NO JUDICIÁRIO BRASILEIRO E A OBSERVÂNCIA DO PRINCÍPIO DA PRECAUÇÃO: JUIZ NATURAL OU "JUIZ ARTIFICIAL"? [The regulation of disruptive decision-making in the Brazilian judiciary and observance of the precautionary principle: natural judge or "artificial judge"?]. 19, 91-117. https://doi.org/10.12662/2447-6641OJ.V19I30.P91-117.2021.

Aifen, X., Ge, Y., & Khan, I. (2023). Preventing Terrorism with Precaution: An Examination of the Precautionary Principle to Counter-Terrorism Measures. Journal of Law & Social Studies. https://doi.org/10.52279/jlss.05.02.153162.

Stefánsson, H. (2019). On the Limits of the Precautionary Principle. Risk Analysis, 39. https://doi.org/10.1111/risa.13265.

Minkkinen, M., Laine, J., & Mäntymäki, M. (2022). Continuous Auditing of Artificial Intelligence: a Conceptualization and Assessment of Tools and Frameworks. Digital Society, 1. https://doi.org/10.1007/s44206-022-00022-2.

Enqvist, L. (2023). ‘Human oversight’ in the EU artificial intelligence act: what, when and by whom?. Law, Innovation and Technology, 15, 508 - 535. https://doi.org/10.1080/17579961.2023.2245683.

Ho-Dac, M., & Martinez, B. (2024). Human Oversight of Artificial Intelligence and Technical Standardisation. ArXiv, abs/2407.17481. https://doi.org/10.48550/arXiv.2407.17481.

Laux, J. (2023). Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act. Ai & Society, 39, 2853 - 2866. https://doi.org/10.1007/s00146-023-01777-z.

Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: themes, knowledge gaps and future agendas. Internet Res., 33, 133-167. https://doi.org/10.1108/intr-01-2022-0042.

Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40, 137 - 157. https://doi.org/10.1080/14494035.2021.1928377.

Shneiderman, B. (2016). Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proceedings of the National Academy of Sciences, 113, 13538 - 13540. https://doi.org/10.1073/pnas.1618211113.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389 - 399. https://doi.org/10.1038/s42256-019-0088-2.

Huang, C., Zhang, Z., Mao, B., & Yao, X. (2023). An Overview of Artificial Intelligence Ethics. IEEE Transactions on Artificial Intelligence, 4, 799-819. https://doi.org/10.1109/TAI.2022.3194503.

Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. J. Database Manag., 31, 74-87. https://doi.org/10.4018/jdm.2020040105.

Jedličková, A. (2024). Ensuring Ethical Standards in the Development of Autonomous and Intelligent Systems. IEEE Transactions on Artificial Intelligence, 5, 5863-5872. https://doi.org/10.1109/TAI.2024.3387403.

Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for Artificial Intelligence and Digital technologies. Int. J. Inf. Manag., 62, 102433. https://doi.org/10.1016/j.ijinfomgt.2021.102433.

Ferrell, O., Harrison, D., Ferrell, L., Ajjan, H., & Hochstein, B. (2024). A theoretical framework to guide AI ethical decision making. AMS Review. https://doi.org/10.1007/s13162-024-00275-9.

Osasona, F., Amoo, O., Atadoga, A., Abrahams, T., Farayola, O., & Ayinla, B. (2024). REVIEWING THE ETHICAL IMPLICATIONS OF AI IN DECISION MAKING PROCESSES. International Journal of Management & Entrepreneurship Research. https://doi.org/10.51594/ijmer.v6i2.773.

Hanna, M., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M., & Rashidi, H. (2024). Ethical and Bias Considerations in Artificial Intelligence (AI)/Machine Learning. Modern Pathology, 100686. https://doi.org/10.1016/j.modpat.2024.100686.

Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Darrell, T., Harari, Y., Zhang, Y., Xue, L., Shalev-Shwartz, S., Hadfield, G., Clune, J., Maharaj, T., Hutter, F., Baydin, A., McIlraith, S., Gao, Q., Acharya, A., Krueger, D., Dragan, A., Torr, P., Russell, S., Kahneman, D., Brauner, J., & Mindermann, S. (2023). Managing extreme AI risks amid rapid progress. Science, 384, 842 - 845. https://doi.org/10.1126/science.adn0117.

Steimers, A., & Schneider, M. (2022). Sources of Risk of AI Systems. International Journal of Environmental Research and Public Health, 19. https://doi.org/10.3390/ijerph19063641.

Lin, H., & Liu, W. (2020). Risks and Prevention in the Application of AI, 700-704. https://doi.org/10.1007/978-3-030-62746-1_104.

Nyavor, H. (2025). AI-powered disease prevention: Predicting health risks through machine learning for proactive care approaches. International Journal of Science and Research Archive. https://doi.org/10.30574/ijsra.2025.15.1.1018.

Lee, M. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5. https://doi.org/10.1177/2053951718756684.

Bogert, E., Schecter, A., & Watson, R. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11. https://doi.org/10.1038/s41598-021-87480-9.

Li, Y., & Goel, S. (2024). Making It Possible for the Auditing of AI: A Systematic Review of AI Audits and AI Auditability. Information Systems Frontiers. https://doi.org/10.1007/s10796-024-10508-8.

Jeyarajan, B., Murugan, A., Pandy, G., & Pugazhenthi, V. (2025). AI for Predictive Monitoring and Anomaly Detection in DevOps Environments. SoutheastCon 2025, 450-455. https://doi.org/10.1109/SoutheastCon56624.2025.10971552.

Gummadi, A., Napier, J., & Abdallah, M. (2024). XAI-IoT: An Explainable AI Framework for Enhancing Anomaly Detection in IoT Systems. IEEE Access, 12, 71024-71054. https://doi.org/10.1109/ACCESS.2024.3402446.

Cali, U., Catak, F., & Halden, U. (2024). Trustworthy cyber-physical energy systems using artificial intelligence: dueling algorithms for PMU anomaly detection and cybersecurity. Artif. Intell. Rev., 57, 183. https://doi.org/10.1007/s10462-024-10827-x.

Mishra, S., & Nayak, S. (2025). AI-Driven Anomaly Detection and Performance Optimization in Background Screening Systems. Scholars Journal of Engineering and Technology. https://doi.org/10.36347/sjet.2025.v13i02.006.

Kaur, D., Uslu, S., Rittichier, K., & Durresi, A. (2022). Trustworthy Artificial Intelligence: A Review. ACM Computing Surveys (CSUR), 55, 1 - 38. https://doi.org/10.1145/3491209.

Sekar, A. (2025). The role of AI/ML in improving system reliability of large-scale distributed systems. World Journal of Advanced Research and Reviews. https://doi.org/10.30574/wjarr.2025.26.1.1064.

Sheikh, N. (2025). AI-Driven Observability: Enhancing System Reliability and Performance. Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023. https://doi.org/10.60087/jaigs.v7i01.322.

Cheng, Q., Huang, M., Man, C., Shen, A., Dai, L., Yu, H., & Hashimoto, M. (2023). Reliability Exploration of System-on-Chip With Multi-Bit-Width Accelerator for Multi-Precision Deep Neural Networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 70, 3978-3991. https://doi.org/10.1109/TCSI.2023.3300899.

Gnad, D., Gotthard, M., Krautter, J., Kritikakou, A., Meyers, V., Rech, P., Condia, J., Ruospo, A., Sánchez, E., Santos, F., Sentieys, O., Tahoori, M., Tessier, R., & Traiola, M. (2024). Reliability and Security of AI Hardware. 2024 IEEE European Test Symposium (ETS), 1-10. https://doi.org/10.1109/ETS61313.2024.10567471.

Moskalenko, V., Kharchenko, V., & Semenov, S. (2024). Model and Method for Providing Resilience to Resource-Constrained AI-System. Sensors (Basel, Switzerland), 24. https://doi.org/10.3390/s24185951.

Moskalenko, V., Kharchenko, V., Moskalenko, A., & Kuzikov, B. (2023). Resilience and Resilient Systems of Artificial Intelligence: Taxonomy, Models and Methods. Algorithms, 16, 165. https://doi.org/10.3390/a16030165.

Cody, T., & Beling, P. (2023). Towards operational resilience for AI-based cyber in multi-domain operations. Proceedings of SPIE, 12538, 125381D. https://doi.org/10.1117/12.2675862.

Moskalenko, V., Moskalenko, A., Kudryavtsev, A., & Moskalenko, Y. (2024). Resilience-aware MLOps for Resource-constrained AI-system, 462-473.

Moskalenko, V., Moskalenko, A., Kudryavtsev, A., & Moskalenko, Y. (2023). Robustness and robust artificial intelligence systems: taxonomy, models, and methods. Algorithms, 16, 165. https://doi.org/10.3390/a16030165.

Busuioc, M. (2020). Accountable Artificial Intelligence: Holding Algorithms to Account. Public Administration Review, 81, 825 - 836. https://doi.org/10.1111/puar.13293.

Van De Poel, I. (2020). Embedding Values in Artificial Intelligence (AI) Systems. Minds and Machines, 30, 385 - 409. https://doi.org/10.1007/s11023-020-09537-4.

Stettinger, G., Weissensteiner, P., & Khastgir, S. (2024). Trustworthiness Assurance Assessment for High-Risk AI-Based Systems. IEEE Access, 12, 22718-22745. https://doi.org/10.1109/ACCESS.2024.3364387.

Itsuji, H., Uezono, T., Toba, T., & Kundu, S. (2024). Real-Time Diagnostic Technique for AI-Enabled System. IEEE Open Journal of Intelligent Transportation Systems, 5, 483-494. https://doi.org/10.1109/OJITS.2024.3435712.

Zdravković, M., Panetto, H., & Weichhart, G. (2021). AI-enabled Enterprise Information Systems for Manufacturing. Enterprise Information Systems, 16, 668 - 720. https://doi.org/10.1080/17517575.2021.1941275.

Rožanec, J., Novalija, I., Zajec, P., Kenda, K., Tavakoli, H., Suh, S., Veliou, E., Papamartzivanos, D., Giannetsos, T., Menesidou, S., Alonso, R., Cauli, N., Meloni, A., Recupero, D., Kyriazis, D., Sofianidis, G., Theodoropoulos, S., Fortuna, B., Mladenić, D., & Soldatos, J. (2022). Human-centric artificial intelligence architecture for industry 5.0 applications. International Journal of Production Research, 61, 6847-6872. https://doi.org/10.1080/00207543.2022.2138611.

Riedl, M. (2019). Human-Centered Artificial Intelligence and Machine Learning. ArXiv, abs/1901.11184. https://doi.org/10.1002/HBE2.117.

Amariles, D., & Baquero, P. (2023). Promises and limits of law for a human-centric artificial intelligence. Comput. Law Secur. Rev., 48, 105795. https://doi.org/10.1016/j.clsr.2023.105795.

Kumar, S., Datta, S., Singh, V., Datta, D., Singh, S., & Sharma, R. (2024). Applications, Challenges, and Future Directions of Human-in-the-Loop Learning. IEEE Access, 12, 75735-75760. https://doi.org/10.1109/ACCESS.2024.3401547.

Zhang, P., Liu, W., & Shao, J. (2022). Research on Human-in-the-loop Traffic Adaptive Decision Making Method. 2022 4th International Conference on Robotics and Computer Vision (ICRCV), 272-276. https://doi.org/10.1109/ICRCV55858.2022.9953216.

Enarsson, T., Enqvist, L., & Naarttijärvi, M. (2021). Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts. Information & Communications Technology Law, 31, 123 - 153. https://doi.org/10.1080/13600834.2021.1958860.

Johnson, J. (2022). Automating the OODA loop in the age of intelligent machines: reaffirming the role of humans in command-and-control decision-making in the digital age. Defence Studies, 23, 43 - 67. https://doi.org/10.1080/14702436.2022.2102486.

Trunk, A., Birkel, H., & Hartmann, E. (2020). On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research. https://doi.org/10.1007/s40685-020-00133-x.

Wulf, A., & Seizov, O. (2022). "Please understand we cannot provide further information": evaluating content and transparency of GDPR-mandated AI disclosures. AI Soc., 39, 235-256. https://doi.org/10.1007/s00146-022-01424-z.

Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Cybersecurity. https://doi.org/10.2139/ssrn.3063289.

Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other "suitable safeguards" in the national legislations. Comput. Law Secur. Rev., 35, 105327. https://doi.org/10.1016/J.CLSR.2019.05.002.

Cobbe, J., & Singh, J. (2020). Reviewable Automated Decision-Making. Comput. Law Secur. Rev., 39, 105475. https://doi.org/10.1016/j.clsr.2020.105475.

Nassen, L., Vandebosch, H., Poels, K., & Karsay, K. (2023). Opt-out, abstain, unplug. A systematic review of the voluntary digital disconnection literature. Telematics Informatics, 81, 101980. https://doi.org/10.1016/j.tele.2023.101980.

Chen, T., Guo, W., Gao, X., & Liang, Z. (2020). AI-based self-service technology in public service delivery: User experience and influencing factors. Gov. Inf. Q., 38, 101520. https://doi.org/10.1016/j.giq.2020.101520.

Alabed, A., Javornik, A., & Gregory‐Smith, D. (2022). AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda. Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2022.121786.

Felzmann, H., Villaronga, E., Lutz, C., & Tamó-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6. https://doi.org/10.1177/2053951719860542.

Buiten, M. (2019). Towards Intelligent Regulation of Artificial Intelligence. European Journal of Risk Regulation, 10, 41 - 59. https://doi.org/10.1017/err.2019.8.

Du, Y. (2022). On the Transparency of Artificial Intelligence System. Journal of Autonomous Intelligence. https://doi.org/10.32629/jai.v5i1.486.

Popovych, T. (2024). Legal obligations of transparency in the field of artificial intelligence. Uzhhorod National University Herald. Series: Law. https://doi.org/10.24144/2307-3322.2024.85.4.59.

Alakbarzadeh, V. (2025). Beynəlxalq hüquq kontekstində süni intellekt - insan hüquqları və hüquqi hesabatlılıq çağırışları [Artificial intelligence in the context of international law: challenges of human rights and legal accountability]. Azerbaijan Law Journal. https://doi.org/10.61638/yzzp8534.

Kolarević, E. (2022). The influence of Artificial intelligence on the right to freedom of expression. Pravo - teorija i praksa. https://doi.org/10.5937/ptp2201111k.

Polok, B., El-Taj, H., & Rana, A. (2023). Balancing Potential and Peril: The Ethical Implications of Artificial Intelligence on Human Rights. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4484386.

Jungherr, A. (2023). Artificial Intelligence and Democracy: A Conceptual Framework. Social Media + Society, 9. https://doi.org/10.1177/20563051231186353.

Kouroupis, K. (2024). AI and politics: ensuring or threatening democracy?. Juridical Tribune. https://doi.org/10.24818/tbj/2023/13/4.05.

Teckchandani, J. (2024). AI in International Politics. International Journal for Research in Applied Science and Engineering Technology. https://doi.org/10.22214/ijraset.2024.58934.

Bender, S. (2022). Algorithmic Elections. Michigan Law Review. https://doi.org/10.36644/mlr.121.3.algorithmic.

Savaget, P., Chiarini, T., & Evans, S. (2018). Empowering political participation through artificial intelligence. Science & Public Policy, 46, 369 - 380. https://doi.org/10.1093/scipol/scy064.

Candrian, C., & Scherer, A. (2022). Rise of the machines: Delegating decisions to autonomous AI. Comput. Hum. Behav., 134, 107308. https://doi.org/10.1016/j.chb.2022.107308.

Dodig-Crnkovic, G., Basti, G., & Holstein, T. (2024). Delegating Responsibilities to Intelligent Autonomous Systems: Challenges and Benefits. Journal of bioethical inquiry. https://doi.org/10.48550/arXiv.2411.15147.

Tretter, M. (2025). Opportunities and challenges of AI-systems in political decision-making contexts. Frontiers in Political Science. https://doi.org/10.3389/fpos.2025.1504520.

Caiza, G., Sanguña, V., Tusa, N., Masaquiza, V., Ortiz, A., & Garcia, M. (2024). Navigating Governmental Choices: A Comprehensive Review of Artificial Intelligence's Impact on Decision-Making. Informatics, 11, 64. https://doi.org/10.3390/informatics11030064.

Presuel, R., & Sierra, J. (2023). The Adoption of Artificial Intelligence in Bureaucratic Decision-making: A Weberian Perspective. Digital Government: Research and Practice, 5, 1 - 20. https://doi.org/10.1145/3609861.

Kolkman, D., Bex, F., Narayan, N., & Van Der Put, M. (2024). Justitia ex machina: The impact of an AI system on legal decision-making and discretionary authority. Big Data & Society, 11. https://doi.org/10.1177/20539517241255101.

Morić, Z., Dakić, V., & Urošev, S. (2025). An AI-Based Decision Support System Utilizing Bayesian Networks for Judicial Decision-Making. Systems. https://doi.org/10.3390/systems13020131.

Greenstein, S. (2021). Preserving the rule of law in the era of artificial intelligence (AI). Artificial Intelligence and Law, 30, 291 - 323. https://doi.org/10.1007/s10506-021-09294-4.

Formosa, P., Rogers, W., Griep, Y., Bankins, S., & Richards, D. (2022). Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Comput. Hum. Behav., 133, 107296. https://doi.org/10.1016/j.chb.2022.107296.

Orwat, C. (2024). Algorithmic Discrimination From the Perspective of Human Dignity. Social Inclusion. https://doi.org/10.17645/si.7160.

Alakwe, K. (2023). Human Dignity in the Era of Artificial Intelligence and Robotics: Issues and Prospects. Journal of Humanities and Social Sciences Studies. https://doi.org/10.32996/jhsss.2023.5.6.10.

Federspiel, F., Mitchell, R., Asokan, A., Umaña, C., & Mccoy, D. (2023). Threats by artificial intelligence to human health and human existence. BMJ Global Health, 8. https://doi.org/10.1136/bmjgh-2022-010435.

Ahmad, S., Han, H., Alam, M., Rehmat, M., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities & Social Sciences Communications, 10. https://doi.org/10.1057/s41599-023-01787-8.

Gürkaynak, G., Yilmaz, I., & Haksever, G. (2016). Stifling artificial intelligence: Human perils. Comput. Law Secur. Rev., 32, 749-758. https://doi.org/10.1016/J.CLSR.2016.05.003.

Chong, L., Zhang, G., Goucher-Lambert, K., Kotovsky, K., & Cagan, J. (2022). Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice. Comput. Hum. Behav., 127, 107018. https://doi.org/10.1016/j.chb.2021.107018.

Merlec, M., Lee, Y., Hong, S., & In, H. (2021). A Smart Contract-Based Dynamic Consent Management System for Personal Data Usage under GDPR. Sensors (Basel, Switzerland), 21. https://doi.org/10.3390/s21237994.

Florea, M. (2023). Withdrawal of consent for processing personal data in biomedical research. International Data Privacy Law. https://doi.org/10.1093/idpl/ipad008.

Hagendorff, T. (2019). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30, 99 - 120. https://doi.org/10.1007/s11023-020-09517-8.

Ortega-Bolaños, R., Bernal-Salcedo, J., Ortiz, M., Sarmiento, J., Ruz, G., & Tabares-Soto, R. (2024). Applying the ethics of AI: a systematic review of tools for developing and assessing AI-based systems. Artif. Intell. Rev., 57, 110. https://doi.org/10.1007/s10462-024-10740-3.

Goldenthal, E., Park, J., Liu, S., Mieczkowski, H., & Hancock, J. (2021). Not All AI are Equal: Exploring the Accessibility of AI-Mediated Communication Technology. Comput. Hum. Behav., 125, 106975. https://doi.org/10.1016/j.chb.2021.106975.

Singh, K., & C. (2024). Bias and Fairness in Artificial Intelligence: Methods and Mitigation Strategies. International Journal for Research Publication and Seminar. https://doi.org/10.36676/jrps.v15.i3.1425.

Davoodi, A. (2024). EQUAL AI: A Framework for Enhancing Equity, Quality, Understanding and Accessibility in Liberal Arts through AI for Multilingual Learners. Language, Technology, and Social Media. https://doi.org/10.70211/ltsm.v2i2.139.

Androutsopoulou, A., Karacapilidis, N., Loukis, E., & Charalabidis, Y. (2019). Transforming the communication between citizens and government through AI-guided chatbots. Gov. Inf. Q., 36, 358-367. https://doi.org/10.1016/J.GIQ.2018.10.001.

Sezgin, E., & Kocaballi, A. (2024). Era of Generalist Conversational Artificial Intelligence to Support Public Health Communications. Journal of Medical Internet Research, 27. https://doi.org/10.2196/69007.

Türksoy, N. (2022). The Future of Public Relations, Advertising and Journalism: How Artificial Intelligence May Transform the Communication Profession and Why Society Should Care. Türkiye İletişim Araştırmaları Dergisi. https://doi.org/10.17829/turcom.1050491.

Guzman, A., & Lewis, S. (2019). Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media & Society, 22, 70 - 86. https://doi.org/10.1177/1461444819858691.

Soldan, T. (2022). A Qualitative Research on The Use of Artificial Intelligence in Public Relations. The Journal of International Scientific Researches. https://doi.org/10.23834/isrjournal.1113438.

Alkrisheh, M., & Gourari, F. (2025). Criminal Liability for Paid Disinformation in the Digital World: A Comparative Study between UAE Law and the European Digital Services Act (DSA). Access to Justice in Eastern Europe. https://doi.org/10.33327/ajee-18-8.2-r000110.

Gomathy, D., Geetha, V., Manohar, S., & Rajesh, P. (2024). Legal Frameworks for Regulating Cybercrime and Cyber Terrorism. International Journal of Scientific Research in Engineering and Management. https://doi.org/10.55041/ijsrem37509.

Shinde, N. (2025). Cyber Terrorism: The Emerging Threat in the Digital Age. International Journal For Multidisciplinary Research. https://doi.org/10.36948/ijfmr.2025.v07i02.43602.

Radoniewicz, F. (2021). Combating cyberterrorism within the EU: selected substantive criminal law aspects [Zwalczanie cyberterroryzmu w ramach UE – wybrane aspekty karnomaterialne]. Cybersecurity and Law. https://doi.org/10.35467/cal/133898.

Al-Shair, M. (2021). Legal problems in confronting digital terrorism: A statistical study. Journal of Al-Rafidain University College for Sciences. https://doi.org/10.55562/jrucs.v28i2.385.

Bajpai, S. (2020). Legal Framework on Cyber Terrorism, 40, 933-945.

Taylor, R., Fritsch, E., & Liederbach, J. (2005). Digital Crime and Digital Terrorism.

Kolkman, D., Bex, F., Narayan, N., & Van Der Put, M. (2024). Justitia ex machina: The impact of an AI system on legal decision-making and discretionary authority. Big Data & Society, 11. https://doi.org/10.1177/20539517241255101.

Atkinson, K., Bench-Capon, T., & Bollegala, D. (2020). Explanation in AI and law: Past, present and future. Artif. Intell., 289, 103387. https://doi.org/10.1016/J.ARTINT.2020.103387.

Vujicic, J. (2025). AI Ethics in Legal Decision-Making Bias, Transparency, And Accountability. International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering. https://doi.org/10.15662/ijareeie.2025.1404001.

Scherer, M. (2019). Artificial Intelligence and Legal Decision-Making: The Wide Open?. Journal of International Arbitration. https://doi.org/10.54648/joia2019028.

Dokumacı, M. (2024). AI-Driven Econometric Models for Legal Issues. Human Computer Interaction. https://doi.org/10.62802/btfvze98.

Samee, N., Alabdulhafith, M., Shah, M., & Rizwan, A. (2024). JusticeAI: A Large Language Models Inspired Collaborative and Cross-Domain Multimodal System for Automatic Judicial Rulings in Smart Courts. IEEE Access, 12, 173091-173107. https://doi.org/10.1109/ACCESS.2024.3491775.

Andriati, S., Rizki, I., & Malian, A. (2024). Justice on Trial: How Artificial Intelligence is Reshaping Judicial Decision-Making. Journal of Indonesian Legal Studies. https://doi.org/10.15294/jils.v9i2.13683.

Mohan, B. (2023). The Ethics of Artificial Intelligence in Legal Decision Making: An Empirical Study. Psychology and Education. https://doi.org/10.48047/pne.2018.55.1.38.

Sharma, R. (2023). Exploring the Ethical Implications of AI in Legal Decision-Making. Indian Journal of Law. https://doi.org/10.36676/ijl.2023-v1i1-06.

Roberts, H. (2024). Digital sovereignty and artificial intelligence: a normative approach. Ethics Inf. Technol., 26, 70. https://doi.org/10.1007/s10676-024-09810-5.

Usman, H., Nawaz, B., & Naseer, S. (2023). The Future of State Sovereignty in the Age of Artificial Intelligence. Journal of Law & Social Studies. https://doi.org/10.52279/jlss.05.02.142152.

Klare, M., Verlande, L., Greiner, M., & Lechner, U. (2022). How Blockchain and Artificial Intelligence Influence Digital Sovereignty, 3-16. https://doi.org/10.1007/978-3-031-30694-5_1.

Al-Zubaidi, R., & Zeidan, R. (2024). Artificial Intelligence Technology and its Implications for the Sovereignty of the Nation-State. International Journal of Educational Sciences and Arts. https://doi.org/10.59992/ijesa.2024.v3n5p3.

Goralski, M., & Tan, T. (2020). Artificial intelligence and sustainable development. The International Journal of Management Education. https://doi.org/10.1016/j.ijme.2019.100330.

Dhamija, P., & Bag, S. (2020). Role of artificial intelligence in operations environment: a review and bibliometric analysis. The TQM Journal. https://doi.org/10.1108/tqm-10-2019-0243.

March, C., & Schieferdecker, I. (2023). Technological Sovereignty as Ability, Not Autarky. CESifo: Macro. https://doi.org/10.1093/isr/viad012.

Curzon, J., Kosa, T., Akalu, R., & El-Khatib, K. (2021). Privacy and Artificial Intelligence. IEEE Transactions on Artificial Intelligence, 2, 96-108. https://doi.org/10.1109/TAI.2021.3088084.

Saura, J., Ribeiro-Soriano, D., & Palacios-Marqués, D. (2022). Assessing behavioral data science privacy issues in government artificial intelligence deployment. Gov. Inf. Q., 39, 101679. https://doi.org/10.1016/j.giq.2022.101679.

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernández, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, G., Tiropanis, T., & Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10. https://doi.org/10.1002/widm.1356.

Raimundo, R., & Rosário, A. (2021). The Impact of Artificial Intelligence on Data System Security: A Literature Review. Sensors (Basel, Switzerland), 21. https://doi.org/10.3390/s21217029.

Calderaro, A., & Blumfelde, S. (2022). Artificial intelligence and EU security: the false promise of digital sovereignty. European Security, 31, 415 - 434. https://doi.org/10.1080/09662839.2022.2101885.

Nanni, R., Bizzaro, P., & Napolitano, M. (2024). The false promise of individual digital sovereignty in Europe: Comparing artificial intelligence and data regulations in China and the European Union. Policy & Internet. https://doi.org/10.1002/poi3.424.

Floridi, L. (2020). The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU. Philosophy & Technology, 33, 369 - 378. https://doi.org/10.1007/s13347-020-00423-6.

Costa-Barbosa, A., Herlo, B., & Joost, G. (2024). Digital Sovereignty in times of AI: between perils of hegemonic agendas and possibilities of alternative approaches. Liinc em Revista. https://doi.org/10.18617/liinc.v20i2.7312.

Circiumaru, A. (2021). The EU’s Digital Sovereignty - The role of Artificial Intelligence and Competition Policy. Social Science Research Network. https://doi.org/10.2139/SSRN.3831815.

Sheikh, H. (2022). European Digital Sovereignty: A Layered Approach. Digital Society, 1. https://doi.org/10.1007/s44206-022-00025-z.

Valente, J. (2024). Data Workers in AI development. Liinc em Revista. https://doi.org/10.18617/liinc.v20i2.7302.

Schmitt, M. (2023). Securing the digital world: Protecting smart infrastructures and digital industries with artificial intelligence (AI)-enabled malware and intrusion detection. J. Ind. Inf. Integr., 36, 100520. https://doi.org/10.1016/j.jii.2023.100520.

Dall'Agnol, A. (2022). Artificial Intelligence and the Future of War: The United States, China, and Strategic Stability. Journal of Strategic Studies, 46, 749-751. https://doi.org/10.1080/01402390.2022.2104255.

Johnson, J. (2021). Artificial Intelligence and the Future of Warfare. https://doi.org/10.7765/9781526145062.

Wilson, C. (2020). Artificial Intelligence and Warfare, 125-140. https://doi.org/10.1007/978-3-030-28285-1_7.

Maas, M. (2019). How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy, 40, 285-311. https://doi.org/10.1080/13523260.2019.1576464.

Dresp-Langley, B. (2023). The weaponization of artificial intelligence: What the public needs to be aware of. Frontiers in Artificial Intelligence, 6. https://doi.org/10.3389/frai.2023.1154184.

Abaimov, S., & Martellini, M. (2020). Artificial Intelligence in Autonomous Weapon Systems. 21st Century Prometheus. https://doi.org/10.1007/978-3-030-28285-1_8.

Zohuri, B. (2024). Harnessing Artificial Intelligence for Countering Hypersonic Weapons:A New Frontier in Battlefield Offense and Defense(A Short Review). Journal of Energy and Power Engineering. https://doi.org/10.17265/1934-8975/2024.04.002.

Cîrdei, I. (2024). The Use of Artificial Intelligence and Autonomous Weapon Systems in Military Operations. International conference KNOWLEDGE-BASED ORGANIZATION, 30, 43 - 51. https://doi.org/10.2478/kbo-2024-0006.

Rashid, A., Kausik, A., Sunny, A., & Bappy, M. (2023). Artificial Intelligence in the Military: An Overview of the Capabilities, Applications, and Challenges. Int. J. Intell. Syst., 2023, 1-31. https://doi.org/10.1155/2023/8676366.

Asaro, P. (2020). Autonomous Weapons and the Ethics of Artificial Intelligence. Ethics of Artificial Intelligence. https://doi.org/10.1093/oso/9780190905033.003.0008.

Sharan, Y., Gordon, T., & Florescu, E. (2021). Artificial Intelligence and Autonomous Weapons. Tripping Points on the Roads to Outwit Terror. https://doi.org/10.1007/978-3-030-72571-6_7.

Heinz, A. (2025). The militarization of artificial intelligence and the autonomous weapons. UNISCI Journal. https://doi.org/10.31439/unisci-222.

Márton, A. (2021). Steps toward a digital ecology: ecological principles for the study of digital ecosystems. Journal of Information Technology, 37, 250 - 265. https://doi.org/10.1177/02683962211043222.

Koch, M., Krohmer, D., Naab, M., Rost, D., & Trapp, M. (2022). A matter of definition: Criteria for digital ecosystems. Digital Business. https://doi.org/10.1016/j.digbus.2022.100027.

Pekkarinen, S., Hasu, M., Melkas, H., & Saari, E. (2020). Information ecology in digitalising welfare services: a multi-level analysis. Inf. Technol. People, 34, 1697-1720. https://doi.org/10.1108/itp-12-2019-0635.

Petrova, E. (2022). Ecology of the Digital Environment as an Attempt to Respond to the Civilizational Challenges of the Digital Age. Voprosy Filosofii. https://doi.org/10.21146/0042-8744-2022-11-99-109.

Nedungadi, P., Devenport, K., Sutcliffe, R., & Raman, R. (2020). Towards a digital learning ecology to address the grand challenge in adult literacy. Interactive Learning Environments, 31, 383 - 396. https://doi.org/10.1080/10494820.2020.1789668.

Dunleavy, P., & Margetts, H. (2023). Data science, artificial intelligence and the third wave of digital era governance. Public Policy and Administration, 40, 185 - 214. https://doi.org/10.1177/09520767231198737.

Dwivedi, Y., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P., Janssen, M., Jones, P., Kar, A., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., Medaglia, R., Meunier-FitzHugh, K., Meunier-FitzHugh, L., Misra, S., Mogaji, E., Sharma, S., Singh, J., Raghavan, V., Raman, R., Rana, N., Samothrakis, S., Spencer, J., Tamilmani, K., Tubadji, A., Walton, P., & Williams, M. (2019). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management. https://doi.org/10.1016/J.IJINFOMGT.2019.08.002.

Chen, L., Chen, P., & Lin, Z. (2020). Artificial Intelligence in Education: A Review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510.

Silcox, C., Zimlichman, E., Huber, K., Rowen, N., Saunders, R., McClellan, M., Kahn, C., Salzberg, C., & Bates, D. (2024). The potential for artificial intelligence to transform healthcare: perspectives from international health leaders. NPJ Digital Medicine, 7. https://doi.org/10.1038/s41746-024-01097-6.

Johnson, P., Laurell, C., Ots, M., & Sandström, C. (2022). Digital innovation and the effects of artificial intelligence on firms’ research and development – Automation or augmentation, exploration or exploitation?. Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2022.121636.

Ooi, K., Tan, G., Al-Emran, M., Al-Sharafi, M., Căpățînă, A., Chakraborty, A., Dwivedi, Y., Huang, T., Kar, A., Lee, V., Loh, X., Micu, A., Mikalef, P., Mogaji, E., Pandey, N., Raman, R., Rana, N., Sarker, P., Sharma, A., Teng, C., Wamba, S., & Wong, L. (2023). The Potential of Generative Artificial Intelligence Across Disciplines: Perspectives and Future Directions. Journal of Computer Information Systems, 65, 76 - 107. https://doi.org/10.1080/08874417.2023.2261010.

Aldoseri, A., Al-Khalifa, K., & Hamouda, A. (2024). AI-Powered Innovation in Digital Transformation: Key Pillars and Industry Impact. Sustainability. https://doi.org/10.3390/su16051790.

Filgueiras, F. (2023). Artificial intelligence and education governance. Education, Citizenship and Social Justice, 19, 349 - 361. https://doi.org/10.1177/17461979231160674.

Borgesius, F. (2020). Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights, 24, 1572 - 1593. https://doi.org/10.1080/13642987.2020.1743976.

Allen, R., & Masters, D. (2020). Artificial Intelligence: the right to protection from discrimination caused by algorithms, machine learning and automated decision-making. ERA Forum, 20, 585-598. https://doi.org/10.1007/s12027-019-00582-w.

Arnanz, A. (2023). Creating non-discriminatory Artificial Intelligence systems: balancing the tensions between code granularity and the general nature of legal rules. IDP. Revista de Internet, Derecho y Política. https://doi.org/10.7238/idp.v0i38.403794.

Schwitzgebel, E., & Garza, M. (2015). A Defense of the Rights of Artificial Intelligences. Midwest Studies in Philosophy, 39, 98-119. https://doi.org/10.1111/MISP.12032.

Aizenberg, E., & Van Den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7. https://doi.org/10.1177/2053951720949566.

Donahoe, E., & Metzger, M. (2019). Artificial Intelligence and Human Rights. Journal of Democracy, 30, 115 - 126. https://doi.org/10.1353/JOD.2019.0029.

Laitinen, A., & Sahlgren, O. (2021). AI Systems and Respect for Human Autonomy. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.705164.

Amariles, D., & Baquero, P. (2023). Promises and limits of law for a human-centric artificial intelligence. Comput. Law Secur. Rev., 48, 105795. https://doi.org/10.1016/j.clsr.2023.105795.

Orwat, C. (2024). Algorithmic Discrimination From the Perspective of Human Dignity. Social Inclusion. https://doi.org/10.17645/si.7160.

Lamers, L., Meijerink, J., Jansen, G., & Boon, M. (2022). A Capability Approach to worker dignity under Algorithmic Management. Ethics and Information Technology, 24. https://doi.org/10.1007/s10676-022-09637-y.

Filho, E., & Firmo, M. (2023). Human Dignity and neurorights in the Digital Age. Brazilian Journal of Law, Technology and Innovation. https://doi.org/10.59224/bjlti.v1i2.87-107.

Zhao, Y., & Ren, Z. (2025). The Alignment of Values: Embedding Human Dignity in Algorithmic Bias Governance for the AGI Era. International Journal of Digital Law and Governance, 0. https://doi.org/10.1515/ijdlg-2025-0006.

Ruster, L., Oliva-Altamirano, P., & Daniell, K. (2022). Centring dignity in algorithm development: testing a Dignity Lens. Proceedings of the 34th Australian Conference on Human-Computer Interaction. https://doi.org/10.1145/3572921.3572938.

Zwitter, A., Gstrein, O., & Yap, E. (2020). Digital Identity and the Blockchain: Universal Identity Management and the Concept of the "Self-Sovereign" Individual. Frontiers in Blockchain, 3. https://doi.org/10.3389/fbloc.2020.00026.

Vardanyan, L., Hamuľák, O., & Kocharyan, H. (2024). Fragmented Identities: Legal Challenges of Digital Identity, Integrity, and Informational Self-Determination. European Studies, 11, 105 - 121. https://doi.org/10.2478/eustu-2024-0005.

Tan, K., Chi, C., & Lam, K. (2023). Survey on Digital Sovereignty and Identity: From Digitization to Digitalization. ACM Computing Surveys, 56, 1 - 36. https://doi.org/10.1145/3616400.

Huu, P. (2023). Impact of employee digital competence on the relationship between digital autonomy and innovative work behavior: a systematic review. Artificial Intelligence Review, 1 - 30. https://doi.org/10.1007/s10462-023-10492-6.

Savolainen, L., & Ruckenstein, M. (2022). Dimensions of autonomy in human–algorithm relations. New Media & Society, 26, 3472 - 3490. https://doi.org/10.1177/14614448221100802.

Kostenko, O. V. (2021) Management of Identification Data: Legal Regulation of Anonymization and Pseudonymization Naukovyi visnyk publichnoho ta pryvatnoho prava, 1, 123–131. https://doi.org/10.32844/2618-1258.2021.1.13

Pfitzmann, A., & Hansen, M. (2010). A terminology for talking about privacy by data minimization: Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and Identity Management.

Pfitzmann, A., & Köhntopp, M. (2000). Anonymity, Unobservability, and Pseudonymity – A Proposal for Terminology, 1-9. https://doi.org/10.1007/3-540-44702-4_1.

Froomkin, A. (1999). Legal Issues in Anonymity and Pseudonymity. Legal Perspectives in Information Systems eJournal. https://doi.org/10.1080/019722499128574.

Garcia-Grau, F., Herrera-Joancomartí, J., & Josa, A. (2022). Attribute Based Pseudonyms: Anonymous and Linkable Scoped Credentials. Mathematics. https://doi.org/10.3390/math10152548.

Federrath, H. (2001). Designing Privacy Enhancing Technologies. https://doi.org/10.1007/3-540-44702-4.

Dange, A., & Vilas, R. (2025). Enhancing Anonymity and Security in Networks: A Comprehensive Analysis of Pseudonym Manager (PM) and Nymble Manager (NM). Power System Technology. https://doi.org/10.52783/pst.1663.

Kurzynoga, M. (2024). The Right to Disconnect: Rest in the Digital Age of Work from the International, European and Polish Law Perspectives. Acta Universitatis Lodziensis. Folia Iuridica. https://doi.org/10.18778/0208-6069.107.06.

Kolomoets, E., Shoniya, G., Mekhmonov, S., Abdulnabi, S., & Karim, N. (2023). The Employee’s Right to Work Offline: A Comparative Analysis of Legal Frameworks in Different Countries. Revista de Gestão Social e Ambiental. https://doi.org/10.24857/rgsa.v17n5-009.

Reyna, J., Gabardo, E., & De Sousa Santos, F. (2020). Electronic government, digital invisibility and fundamental social rights. Seqüência: Estudos Jurídicos e Políticos. https://doi.org/10.5007/2177-7055.2020v41n85p30.

Wolski, O. (2021). The right to stay offline? Not during the pandemic. Journal of Information Technology & Politics, 19, 140 - 155. https://doi.org/10.1080/19331681.2021.1936845.

Mladenov, M., & Serotila, I. (2024). Right to be offline: To be or not to be?. XXI međunarodni naučni skup Pravnički dani - Prof. dr Slavko Carić, na temu: Odgovori pravne nauke na izazove savremenog društva - zbornik radova. https://doi.org/10.5937/pdsc24271m.

Gawełko-Bazan, K. (2024). The right to be offline – notes on the background of existing and proposed regulations. Part II: Polish law. Kwartalnik Prawa Międzynarodowego. https://doi.org/10.5604/01.3001.0054.4284.

Yang, P. (2024). Problems and Countermeasures of Legal Protection of Laborer’s “Off-line Right” in China in the Information Age. International Journal of Frontiers in Sociology. https://doi.org/10.25236/ijfs.2024.060307.

Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review. https://doi.org/10.54648/cola2018095.

Duani, N., Barasch, A., & Morwitz, V. (2024). Demographic Pricing in the Digital Age: Assessing Fairness Perceptions in Algorithmic versus Human-Based Price Discrimination. Journal of the Association for Consumer Research, 9, 257 - 268. https://doi.org/10.1086/729440.

Wachter, S., Mittelstadt, B., & Russell, C. (2020). Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. ArXiv, abs/2005.05906. https://doi.org/10.2139/ssrn.3547922.

Varona, D., & Suárez, J. (2022). Discrimination, Bias, Fairness, and Trustworthy AI. Applied Sciences. https://doi.org/10.3390/app12125826.

Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. https://doi.org/10.1145/2939672.2945386.

Nachbar, T. (2020). Algorithmic Fairness, Algorithmic Discrimination. Artificial Intelligence - Law.

Wang, X., Wu, Y., Ji, X., & Fu, H. (2024). Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices. Frontiers in Artificial Intelligence, 7. https://doi.org/10.3389/frai.2024.1320277.

Tesink, V., Douglas, T., Forsberg, L., Ligthart, S., & Meynen, G. (2023). Neurointerventions in Criminal Justice: On the Scope of the Moral Right to Bodily Integrity. Neuroethics, 16, 1-11. https://doi.org/10.1007/s12152-023-09526-1.

Smolenski, J. (2024). The foundations of informed consent and bodily self-sovereignty: a positive suggestion.. Monash bioethics review. https://doi.org/10.1007/s40592-024-00203-4.

Harbinja, E., Edwards, L., & McVey, M. (2023). Governing ghostbots. Comput. Law Secur. Rev., 48, 105791. https://doi.org/10.1016/j.clsr.2023.105791.

Sullivan, C., & Stalla-Bourdillon, S. (2015). Digital identity and French personality rights - A way forward in recognising and protecting an individual's rights in his/her digital identity. Comput. Law Secur. Rev., 31, 268-279. https://doi.org/10.1016/J.CLSR.2015.01.002.

Solove, D. (2022). The Digital Person. https://doi.org/10.18574/nyu/9780814708965.001.0001.

Augustian, A. (2024). The Role of Personality Rights in Indian Law: Lessons from Jackie Shroff's Legal Battle. Trends in Intellectual Property Research. https://doi.org/10.69971/tipr.1.2.2023.13.

De Miguel Asensio, P. (2022). Protection of Reputation, Good Name and Personality Rights in Cross-Border Digital Media. GRUR International. https://doi.org/10.1093/grurint/ikac090.

Sayed, A. (2024). Legal Protection of Personal Images in the Era of Modern Technology 'Comparative Study'. International Journal of Religion. https://doi.org/10.61707/kz59nw52.

Chu, C., Nyrup, R., Leslie, K., Shi, J., Bianchi, A., Lyn, A., McNicholl, M., Khan, S., Rahimi, S., & Grenier, A. (2022). Digital Ageism: Challenges and Opportunities in Artificial Intelligence for Older Adults. The Gerontologist, 62, 947 - 955. https://doi.org/10.1093/geront/gnab167.

Tacheva, J., & Ramasubramanian, S. (2023). AI Empire: Unraveling the interlocking systems of oppression in generative AI's global order. Big Data & Society, 10. https://doi.org/10.1177/20539517231219241.

Brock, J., & Von Wangenheim, F. (2019). Demystifying AI: What Digital Transformation Leaders Can Teach You about Realistic Artificial Intelligence. California Management Review, 61, 110 - 134. https://doi.org/10.1177/1536504219865226.

Published

October 20, 2025

License

This work is licensed under a Creative Commons Attribution 4.0 International License.

ISBN-13

978-1-0690482-5-7

How to Cite

Oleksii Kostenko. (2025). AI Law Model for Ethical Legislation: Strategic Recommendations for the Regulation of Artificial Intelligence. SciFormat Publishing Inc. https://doi.org/10.69635/978-1-0690482-5-7