As the idea of a 1984-style society becomes less conspiratorial, cancellation based on false information has reached new heights. While the threat of big government looms, one of the world’s most successful companies now holds a key to consumer control.
Three months ago, an engineer at Microsoft was digitally exiled from his Amazon-run home. The owner’s smart doorbell was programmed to say, “Excuse me, can I help you?” But an Amazon driver, allegedly wearing headphones, claimed to have heard a racial slur while delivering a package to the home. Amazon then shut down the homeowner’s smart products for a week following the allegation. No one was home at the time of the alleged incident, and after surveillance videos were reviewed, Amazon confirmed the homeowner did not act inappropriately. This situation exemplifies how consumers can be punished via false allegations—considered guilty until proven innocent. And in the new age of technology, similar events could occur through the use of AI-generated tools.
Geoffrey Hinton—known as the “Godfather of AI” and winner of the Turing Award, the top honor in computer science—created the foundation of neural networks that led to the development of tools like ChatGPT. In May, Hinton resigned from Google to share his concerns about the “existential risk” AI poses, warning that an oversaturation of fake audio, video, and text could leave us “not able to know what is true any more.” ChatGPT, an AI chatbot that uses deep learning to generate human-like, conversational text, has risen to ubiquity in the past year. Experts warn of its potential for spreading misinformation, enabling cyberattacks, and facilitating plagiarism.
AI voice cloning, for example, can replicate a particular voice from nothing more than an audio clip found online. This technology has led to an increase in realistic-sounding scam calls that appear to come from loved ones. Deepfake video technology can misrepresent public figures or create fictitious people, blurring fact and fiction. Midjourney and DALL-E, AI image generators, have proven how easily people on the internet can be fooled by fake imagery. The potential misuse of such tools has led to a call for responsible guardrails and industry best practices.
Sam Altman, CEO of OpenAI, testified at a congressional hearing in May, calling for regulatory intervention. Altman proposed a federal agency to set safety standards for AI technology. Christina Montgomery, chief privacy officer of IBM, argued for a “precision regulation” approach. She explained to Congress, “This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself.”
After the release of GPT-4, numerous AI researchers and tech leaders called for “AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” The open letter continued, “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
Some tech leaders argue that, to stay ahead of China, the United States should let its own tech companies take the lead. On the other hand, some AI creators admit they do not fully understand, let alone control, the technology they have built. This raises the question: how can a company reliably regulate itself if it cannot control its own technology?
While some zealous technocrats praise AI as the solution to many of humanity’s woes, we’d be well-served to remember what Thomas Sowell once famously said: “There are no solutions, there are only trade-offs.” Surely, there are benefits to be realized from specific applications of AI, but we are already seeing truly dystopian consequences when unfettered technological experimentation is allowed to run its course. Lawmakers and industry leaders should heed this maxim and adopt policies that erect responsible guardrails to promote the good and limit the bad of this technology.
We must appropriately balance the benefits of new and emerging technologies against the threats they pose to consumers and their privacy. These tools now have the potential to be used against leaders and ordinary people alike; misinformation can even get you locked out of your home. As technology and innovation are welcomed into our society, it is crucial to align regulatory frameworks with ethical AI principles.