A look back at the talk given by our AI Program Director, alongside Fabrice Frossard, at the Cybersec Cloud Forum on the theme:
‘Artificial intelligence: what risks and what protection can we expect from this technology? What are the risks associated with generative AI, and what countermeasures need to be put in place?’
During this workshop, Andrzej Neugebauer highlighted the major issues relating to cybersecurity in the face of generative AI. In a world where artificial intelligence is evolving at a breakneck pace, it is essential to understand both the opportunities and the threats that this technology represents.
Generative AI: Opportunity or Threat?
Generative AI is a powerful technology that can be used for both beneficial and malicious purposes. It can be used to improve cybersecurity, but it can also become a formidable tool for cybercriminals.
Threats heightened by generative AI
- Ultra-realistic phishing: Creation of error-free fraudulent e-mails that perfectly mimic a victim's communication style.
- Deepfake audio and video: Impersonation of real people to deceive companies and institutions.
- Automated attacks: Generation of malicious scripts from simple natural-language commands.
- Massive cyber attacks: Large-scale, simultaneous exploitation of vulnerabilities, with an alarming success rate (87% according to a Stanford study).
How can we improve security?
- User awareness and training: the first line of defence against attacks remains human vigilance
- Multi-factor authentication (MFA): additional authentication factors (such as hardware tokens or biometrics) drastically reduce the risk of malicious access; a minimal sketch follows this list.
- Behavioural analysis and real-time anomaly detection: flagging activity that deviates from a user's normal patterns; an illustrative sketch also follows this list.
- Reinforcing data security
- Regular vulnerability tests and updates
- Using AI for cybersecurity: intrusion prevention systems (IPS), AI-enhanced firewalls, and SIEM platforms.
- Network segmentation and strict access policies
- Collaboration for improvement: pooling knowledge about attack types and vulnerabilities, and building tools capable of collecting, analysing and processing that shared information.
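To make the MFA point concrete, here is a minimal sketch of time-based one-time password (TOTP) verification in the spirit of RFC 6238, using only the Python standard library. The shared secret, time step and digit count are illustrative assumptions, not values from the talk.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238 style)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // time_step           # current time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Hypothetical demo secret; a real server would compare the code the
    # user submits with the one derived from the enrolled shared secret.
    shared_secret = "JBSWY3DPEHPK3PXP"
    print("Current one-time code:", totp(shared_secret))
```

Because the code depends on the current time window, an attacker who steals only the password still cannot authenticate, which is why MFA drastically reduces the impact of credential theft.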
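Likewise, the behavioural-analysis point can be illustrated with a small unsupervised sketch using scikit-learn's IsolationForest to flag unusual login sessions. The feature set (login hour, failed attempts, megabytes transferred), the synthetic data and the contamination rate are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per session: [login hour, failed attempts, MB transferred].
# These values are synthetic, for demonstration only.
normal_sessions = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15],
    [16, 1, 10], [9, 0, 9], [15, 0, 18], [10, 0, 11],
])

# Train an unsupervised model of "normal" behaviour.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# Score incoming sessions in (near) real time: -1 means anomalous, 1 means normal.
new_sessions = np.array([
    [10, 0, 14],    # typical daytime session
    [3, 12, 950],   # 3 a.m. login, many failures, large transfer
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"session={session.tolist()} -> {status}")
```

In production such scores would feed a SIEM platform or trigger step-up authentication rather than print to a console, but the principle is the same: model normal behaviour, then alert on deviations.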
With OpenLLM France / Europe, we are developing LUCIE, a sovereign and secure AI designed as an alternative to the closed models of GAFAM.