“The Regulation of Large Language Models (LLMs) in Regulation (EU) 2024/1689 on Artificial Intelligence and the GDPR”
Panagiota Kiortsi, Ioannis Igglezakis, Christos Feidas, Dimitrios Kosmopoulos, Georgios Thomopoulos
This study introduces the legal issues arising from the regulation of Large Language Models (LLMs) under Regulation (EU) 2024/1689 on Artificial Intelligence and the General Data Protection Regulation. It highlights that, with technological advances, the capabilities of dialogue models are continuously improving, allowing them to generate more accurate and complex responses. However, ethical and transparency issues remain regarding how they are used and the impact they may have on society, especially concerning privacy protection and ensuring objectivity in the processing of personal data.
Download the article (in Greek)
“Developing an Anti-Phishing Large Language Model: A Focus Group Study on Human, Technological, and Legal Factors”
George A. Thomopoulos, Panagiota Kiortsi, Damianos Dumi Sigalas, Dimitris Kosmopoulos, Ioannis Igglezakis, Christos Fidas
“Developing an Anti-Phishing Large Language Model: A Focus Group Study on Human, Technological, and Legal Factors” was accepted as a full paper at IISA 2024, the 15th International Conference on Information, Intelligence, Systems and Applications, held 17-19 July in Chania, Greece. The paper, part of the AILA project, aims to specify and validate an AI-driven, multi-factor (human, technological, and legal) anti-phishing data model through focus group studies. The findings help identify and discuss human, technological, and legislative user-model endpoints for explicit and implicit user modelling, which will guide the development of the corresponding AI-driven user modelling and profiling mechanisms.