A statement from the DGSI warns of the risks posed by AI to companies

  • Writer: Victor Fersing
  • 3 days ago
  • 2 min read

How does AI pose a threat to the security of French companies?


In a report published last December, the DGSI sounded the alarm regarding the risks posed by the implementation of artificial intelligence in French companies.


Using three French companies as examples, the intelligence service has identified three major sources of risk:


1️⃣ The transmission of confidential data

AI systems such as ChatGPT retain the data provided to them and may subsequently disclose it. The DGSI points out that "the disclosure of internal company information to a generative AI tool, particularly if it involves sensitive data, poses a significant risk of that information being reused".


The DGSI also highlights the issue of sovereignty, as some AI systems store this data abroad, particularly in the United States, whose laws may have extraterritorial reach.


2️⃣ Loss of control

A company may place excessive trust in its AI systems. However, these systems are far from flawless: they generate responses from statistical patterns in their training data, which is itself often biased.


Furthermore, "AI systems can reproduce or amplify biases present in training data, which can lead to unfair or discriminatory decisions".


What is the risk? A loss of human oversight, an over-reliance on imperfect tools, and an inability to understand the decisions these systems make.


3️⃣ Deepfakes

The DGSI cites the case of a French industrial firm that fell victim to a deepfake scam. A manager received a video call from the group’s director, who asked him to transfer funds to a bank account. Surprised by the request, the manager ended the call and the attempted scam was detected.


But other examples do not have such a happy ending, as in Hong Kong, where a multinational company was defrauded of $26 million using a similar method: an invitation to a video conference, deepfakes of colleagues, a request for a transfer, and then the scammers disappeared.


The DGSI's recommendations for reducing these risks:


  • Set out the terms of use for the tool in a set of guidelines

  • Encourage the use of nationally-developed generative AI

  • Prioritise local, on-premises use of AI tools

  • Provide regular training for teams on the use of AI

  • Remain vigilant against the manipulation of information and biased responses


At OFF, we would add a few points to this list, as it is not merely a question of security.


The widespread adoption of AI within a company can look attractive in the short term for productivity gains… but this reliance often erodes staff skills and deepens dependence on tools beyond the company's control.


By Victor Fersing

