An economics student from London travels to Menorca with five friends to celebrate the end of his exams. Before boarding the plane, he sends a photo of himself to their private Snapchat group with the caption: “I’m going to blow up the plane (I’m a member of the Taliban).” When the plane was flying over France, British intelligence relayed the alleged threat to its Spanish counterparts, who scrambled two military jets to escort the flight to the island. Once there, the plane parked in an area away from the terminal and the passengers were disembarked one by one, identified, and subjected to a baggage search with sniffer dogs and bomb squads.
The student was arrested and spent two nights in a cell before being released on bail. A year and a half later, he testified before Spain’s National Court, accused of public disorder. The prosecution is asking for a fine of 22,500 euros plus 94,782 euros in civil liability: the bill for the F-18s. This is not a story about terrorism, nor about the limits of humor. It is an example of what happens when excessive surveillance is combined with racist automatisms in an international security context.
The risks of public Wi-Fi in airports
The prosecution invokes Article 561 of the Penal Code, which sanctions anyone who provokes the mobilization of police, assistance or emergency services through a false alarm of an accident or threat. But Aditya Verma, as the young man is called, did not post the photo on Twitter or on his Instagram account. He posted it to his private Snapchat group, and none of his friends shared it. None of the recipients believed that Verma was carrying a bomb, since they all boarded the plane with him. He says he did it because his friends regularly joke about his Indian origin and dark skin.
Civil Guard experts who examined his devices found anecdotal WhatsApp conversations about the conflict between Pakistan and India and the possibility of an Islamic State attack in that region, but “no connection with radicalism or intent was observed” in him. The fact that the British secret services had access to his private joke leads the prosecutor to interpret it as a public communication. And the British security services do not specify how they obtained it.
The prosecutor assumes that the capture was made via the airport’s Wi-Fi network and that it was done legally. The two premises are interdependent. All airport Wi-Fi networks, including Gatwick’s, require users to log in through a portal where the terms and conditions of the service are accepted: for example, that all communications will be open and subject to monitoring by agencies and authorities for security reasons. Airports are considered critical infrastructure, and monitoring their networks is a legitimate part of their security strategy. But it seems unlikely that a student using Snapchat would need Wi-Fi at his own city’s airport, though it is not impossible that his phone connected automatically without his noticing. Even if it did, Snapchat has its own security protocol.
Before Snowden, network communications traveled unprotected, allowing the UK’s Government Communications Headquarters (GCHQ) and the U.S. National Security Agency to capture data on a massive scale. Today, most traffic is encrypted thanks to the protocol called Transport Layer Security (TLS), and many messaging services, such as Signal or WhatsApp, are end-to-end encrypted. This means that the message leaves the sending phone encrypted and is decrypted only on the receiving phone, remaining protected even on the unsecured or monitored Wi-Fi of an airport. Snapchat states that “snaps (photos) and chats, including voice and video, between you and your friends are private: we don’t analyze their content to create profiles or show you ads. This means that we generally don’t know what you say or post unless you ask us to.” The United Kingdom could now be an exception.
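The end-to-end idea can be illustrated with a deliberately toy sketch: a one-time pad built on Python’s standard library, nothing like the actual protocols that Signal, WhatsApp or Snapchat use. The point is only structural: the key exists solely on the two phones, so an observer on the airport Wi-Fi captures nothing but ciphertext.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR each byte with the key. XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, key))

message = "I'm going to blow up the plane".encode()
# The shared key lives only on the two endpoints, never on the network.
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(message, key)   # this is all a Wi-Fi sniffer would see
assert ciphertext != message            # unreadable without the key

decrypted = xor_cipher(ciphertext, key) # only the recipient, holding the key,
assert decrypted == message             # can recover the original snap
```

Real end-to-end systems replace the pre-shared pad with authenticated key exchange between devices, but the property is the same: whoever controls the network, legally or not, sees only the encrypted bytes.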
Reading encrypted messages is possible, but not everyone can do it. It requires specific hardware to intercept Wi-Fi signals and specialized software to capture the data packets transmitted over the network. That would be incompatible with the “necessary publicity” required to apply Article 561 of the Penal Code. Under U.S. law, Verma would have shared his joke with a “reasonable expectation of privacy.” In Europe, that expectation would not even be necessary, because we have the General Data Protection Regulation and civil rights protections. But post-Brexit Britain does not maintain the same standards of citizen protection. The National Court could end up trying a person in Spain under UK rules.
Last October, the United Kingdom passed the Online Safety Act, which requires companies to analyze user messages to ensure they are not transmitting illegal material, including terrorist content or child pornography. The law does not specify how to do this, but failure to do so can result in criminal prosecution. The only way to comply without breaking encryption is to scan users’ devices and examine messages before they are sent.
This technology is called client-side scanning, also known as Chat Control. It is possible that the authorities read Verma’s joke and overreacted. It is more likely that an automatic algorithm, perhaps Snapchat’s own, did so, and that a level of alarm was triggered that justified the deployment without anyone being able to explain or verify the reason. The European Union is about to begin the trilogue on the European Commission’s regulation against child sexual abuse, which proposes adopting this same technology. This case is a small example of how dystopian its implementation can be.
The racism of a British algorithm
Here is my theory: a client-side scanning system detected keywords (blowing up a plane, Taliban) in a sensitive context (an airport) and, because the sender was an 18-year-old of Indian descent, raised the alarm to a level that the intelligence services received as a terrorist alert, without pausing to contextualize it. Following protocol, they transmitted the alert to the Spanish Ministry of Defense, which, with the plane already in flight and with neither access to nor time for details, logically decided to take extreme precautions and escort the flight to its destination. Once the threat was ruled out, they looked for someone responsible to pay the bill.
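That theory can be sketched as a toy scoring function. Every keyword, weight and threshold below is invented for illustration and bears no relation to any real system; what matters is what the logic ignores: audience, tone, and the sender’s history.

```python
# Hypothetical client-side scanner: keywords plus context, nothing else.
# All lists, weights and the threshold are made up for this sketch.
ALERT_KEYWORDS = {"blow up": 5, "bomb": 5, "taliban": 4}
CONTEXT_BOOST = {"airport": 2, "plane": 2}
THRESHOLD = 8

def threat_score(message: str, location: str) -> int:
    text = message.lower()
    score = sum(w for kw, w in ALERT_KEYWORDS.items() if kw in text)
    if score and location in CONTEXT_BOOST:
        score *= CONTEXT_BOOST[location]  # sensitive place amplifies the score
    return score

def raises_alert(message: str, location: str) -> bool:
    # Absent from the computation: the audience (a private group of friends),
    # the tone (a running joke), the history (no criminal record).
    return threat_score(message, location) >= THRESHOLD

print(raises_alert(
    "I'm going to blow up the plane (I'm a member of the Taliban)", "airport"
))  # → True: the system sees only keywords in a sensitive context
```

A system like this cannot tell a threat from a joke, because everything that distinguishes them lies outside its inputs.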
Technically, the false alarm about the planting of an explosive device was issued by the system, after it intercepted the private conversations of a British citizen on British soil and decided that a student with no criminal record, a chess enthusiast, constituted a credible jihadist threat because of the color of his skin. Ironically, it is the same stereotype that started the joke in the first place. Even the defense argued that the bill should be paid by the British services and not by Aditya Verma.
Instead of recognizing a bias in the system that should be corrected, bearing in mind the nearly two million citizens of the same ethnic origin who live in the United Kingdom, they preferred to prosecute the first victim of the abuse: a teenager who has so thoroughly assimilated the racism of his environment that he makes terrorist jokes about himself before others can make them.