The following is a guest post by Yannick Schrade, CEO and co-founder of Arcium.
Oracle CTO Larry Ellison recently shared his vision for a global network of AI-powered surveillance to keep the public safe, calling it the "best course of action." Critics were quick to compare it to George Orwell's 1984 and describe his business pitch as dystopian. Mass surveillance is an invasion of privacy, carries negative psychological impacts, and can deter people from participating in protests.
But the most worrying aspect of Ellison's vision for the future is that mass surveillance using AI is already a reality. During this year's Summer Olympics, the French government authorized four technology companies – Videtics, Orange Business, ChapsVision and Wintics – to conduct video surveillance across Paris, using AI-powered analytics to monitor behavior and alert security services.
Large-scale AI surveillance is becoming a reality
This controversial policy was made possible by a law passed in 2023 that allows newly developed AI software to analyze public data, making France the first country in the European Union to legalize AI-powered surveillance. Video analytics itself, however, is not new.
The British government first installed CCTV in cities in the 1960s, and by 2022, 78 out of 179 countries surveyed by the OECD were using AI for public facial recognition systems. As AI advances and enables more accurate identification at scale, demand for this technology is expected to grow further.
Historically, governments have used advances in technology to upgrade their mass surveillance systems, often contracting out the dirty work to private companies. In the case of the Paris Olympics, technology companies were given the opportunity to test their AI training models at large-scale public events, gaining access to information about the location and behavior of millions of individuals attending the Games and going about their daily lives in the city.
Privacy vs. Public Safety: The Ethical Dilemma of AI Surveillance
Privacy advocates like me would argue that video surveillance prevents people from living freely and without fear. Policymakers who employ such tactics may argue that they are being used in the name of public safety; surveillance can also keep authorities in check, for example by requiring police officers to wear body cameras. The question is not only whether tech companies should have access to public data at all, but also how securely sensitive information can be stored and transferred between parties.
This poses one of the greatest challenges of our generation: how sensitive information is stored online and how that data is managed between different parties. Whatever the reason governments or companies collect personal data through AI surveillance – whether for public safety or smart cities – a secure environment for data analysis is required.
Decentralized Confidential Computing: A Solution for AI Data Privacy
The decentralized confidential computing (DeCC) movement presents a vision of how to address this problem. Many AI training models, Apple Intelligence being one example, use trusted execution environments (TEEs) that rely on supply chains with single points of failure and require third-party trust, from manufacturing through the certification process. DeCC aims to eliminate these single points of failure and establish a decentralized, trustless system for data analysis and processing.
Additionally, DeCC could make it possible to analyze data without decrypting sensitive information. In theory, a video analytics tool built on a DeCC network could alert the parties monitoring it to security threats without exposing sensitive information about the individuals recorded.
A number of decentralized confidential computing techniques are currently being tested, including zero-knowledge proofs (ZKPs), fully homomorphic encryption (FHE), and multiparty computation (MPC). All of these methods are essentially trying to do the same thing: verify or compute over sensitive information without disclosing it to any party.
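To make the MPC idea concrete, here is a minimal, illustrative sketch (not Arcium's actual protocol, and insecure against active adversaries) of additive secret sharing: three parties jointly compute the sum of their private inputs, and no single party's share reveals anything about any input.

```python
import secrets

P = 2**61 - 1  # prime modulus for share arithmetic

def share(value, n=3):
    """Split `value` into n random additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the shared value."""
    return sum(shares) % P

# Each of three parties secret-shares its private input.
inputs = [42, 17, 99]
all_shares = [share(x) for x in inputs]

# Party i holds one share of every input and adds them locally;
# no party ever sees another party's raw input.
partial_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]

# Combining the partial sums reveals only the total, not the inputs.
total = reconstruct(partial_sums)
print(total)  # 42 + 17 + 99 = 158
```

Each individual share is uniformly random, so only the final, agreed-upon output is ever disclosed – the property the techniques above share.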
MPC has emerged as a frontrunner for DeCC, enabling transparent payments and selective disclosure with high computational power and efficiency. MPC makes it possible to build multiparty execution environments (MXEs): virtual, encrypted execution containers in which any computer program can be run in a fully encrypted and confidential manner.
This enables both training on sensitive, isolated encrypted data and inference using encrypted data and encrypted models. In practice, facial recognition could thus be performed while keeping the underlying data hidden from the parties processing it.
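As a toy sketch of encrypted inference (again an illustration of the general principle, not the MXE design described above): if a linear model's weights are public, each compute party can evaluate its portion of the score directly on additive shares of the private input, because multiplying a share by a public constant preserves the sharing.

```python
import secrets

P = 2**61 - 1  # prime modulus for share arithmetic

def share(value, n=3):
    """Split `value` into n random additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# A public linear model (hypothetical weights, known to all parties).
weights = [3, 5, 2]

# The data owner secret-shares each feature of a private input vector.
features = [10, 4, 7]
feature_shares = [share(f) for f in features]

# Each party computes its share of the dot product locally:
# scaling a share by a public weight keeps the sharing additive.
score_shares = [
    sum(w * fs[i] for w, fs in zip(weights, feature_shares)) % P
    for i in range(3)
]

# Only the final score is revealed, never the raw features.
score = reconstruct(score_shares)
print(score)  # 3*10 + 5*4 + 2*7 = 64
```

Real systems extend this with secure multiplication of two shared values (e.g. Beaver triples) so that model weights can stay encrypted too, which is what makes fully encrypted training and inference possible.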
The analysis gleaned from that data can then be shared among various parties, such as security authorities. Even in a surveillance-heavy environment, this makes it possible to introduce at least some transparency and accountability into the surveillance performed, while keeping most of the data confidential and protected.
Although decentralized confidential computing is still in its development stage, its arrival highlights the risks inherent in trusted systems and provides alternative methods for protecting encrypted data. Machine learning is now integrated into almost every field, from urban planning to healthcare to entertainment.
In each of these use cases, the training models rely on user data, and DeCC will be the basis for ensuring personal privacy and data protection going forward. To avoid a dystopian future, artificial intelligence must be decentralized.