The rise of powerful artificial intelligence tools in 2023 has sparked as much excitement as fear, so much so that it was a major topic of discussion at the World Economic Forum’s 54th annual meeting in Davos. What most concerned attendees was how public trust can be restored in a world where machine-generated content is muddying the waters of information. Many vital insights emerged, but it’s important to remember that the core of this issue is humans, not machines. And thanks to advances in cryptography and identity verification, the tools to address these concerns already exist.

The Rise Of The Machines

2023 saw an array of massive advancements in the fast-growing field of AI. Services like ChatGPT and Midjourney have given people all over the world access to information, insight, and content creation at a scale and quality never before offered by automation. These tools have already delivered powerful benefits across industries and have many excited about what 2024 and beyond could bring.

However, these developments have a darker side: their propensity to enable misinformation, deepfakes, and fraud. For example, voters in New Hampshire were targeted with robocalls just before last week’s presidential primaries. The fake calls featured an AI-generated voice that sounded just like US President Joe Biden, urging recipients to stay home and not vote. The incident is one practical instance of a growing concern: that this technology will increasingly be used to spread falsehoods and manipulate public perception.

The prevalence of this issue is well documented in the European Union’s second annual disinformation report. The investigation outlines many problematic uses of maliciously released, computer-generated information and media, including false news about prominent political leaders and world events, content targeting marginalized groups, and fabricated images and videos of major celebrities.

It isn’t just nefarious actors who use this technology in ways the public doesn’t approve of, either. Even major industry players have been accused of quietly pushing out machine-generated content. Just look at the hot water Sports Illustrated found itself in recently after it was caught publishing AI-generated articles attributed to fake authors, a reminder that even business leaders are susceptible to becoming part of the problem.

Davos Takes Notice

These are just a handful of stories, but they point to a larger picture. Given the potential for abuse, the public is understandably wary, if not outright anxious, about what AI can do. The topic has become so pervasive that the WEF’s annual meeting in Davos was themed around “Rebuilding Trust,” with leaders from all over the globe weighing in on the nature of this threat and what could be done about it. Critical insights from global heads of industry and government included the need for transparency, communication, and education as core principles that can be used to rebuild public trust. As Professor Klaus Schwab, Chairperson of the World Economic Forum, noted:

“Open, transparent conversations can restore mutual trust between individuals and nations who, out of fear for their own future, prioritize their own interests. The resulting dynamics diminish hope for a brighter future. To steer away from crisis-driven dynamics and foster cooperation, trust, and a shared vision for a brighter future, we must create a positive narrative that unlocks the opportunities presented by this historic turning point.”

The sentiments coming out of Davos are valuable, but a practical solution that can be deployed in the near future has largely been overlooked. While industry cooperation and government regulation are essential, everyone must remember that the problem isn’t this new technology; it’s people.

Despite fears of what automation could bring, every example so far of AI causing harm has led back to the people who deployed it, the platforms that spread it, and the users who consumed it. At every level, it should be clearer who content is coming from and who stands behind it. Fortunately, this is a problem that can be solved.

A Digital Solution For A Human Problem

Currently, there’s no standardized way to confirm where most media online comes from. The same goes for the many online personas hiding behind avatars, with little clarity about who they really are. It doesn’t have to stay that way. A system of digital identities and credentials that can be independently verified in an unfalsifiable way, used ubiquitously across business, social media, and news reporting, would go a long way toward combating misinformation and false content.

How is this possible? By implementing online profiles that follow users anywhere they go across the internet, a single pseudonymous identity can confirm they are who they say they are. When such a profile is set up, the applicant performs a one-time “humanity check” that can leverage biometrics, existing documentation, or other verifying information to prove who they are. They are then granted a “confirmed human” credential linked to this ID. Every step is cryptographically signed to eliminate the possibility of forgery. Any service the ID interacts with can then instantly see that this is the verified human they claim to be, not a bot or an imposter.
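As a rough illustration of these mechanics, here is a minimal sketch in Python, assuming a hypothetical issuer that signs a “confirmed human” claim with an Ed25519 key once the humanity check passes. The function names and DID-style identifiers are illustrative only; a production system would use a standard such as W3C Verifiable Credentials rather than this bare-bones token format.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

# Key pair held by the hypothetical humanity-check issuer; the public
# half is published so any service can verify credentials on its own.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

def issue_credential(user_id: str) -> bytes:
    """Bind a 'confirmed human' claim to an ID once the humanity check passes."""
    payload = json.dumps(
        {"subject": user_id, "credential": "confirmed_human"}
    ).encode()
    return payload + issuer_key.sign(payload)  # token = claim || signature

def verify_credential(token: bytes, public_key: Ed25519PublicKey) -> bool:
    """Any service can check a token instantly, with no call back to the issuer."""
    payload, signature = token[:-SIG_LEN], token[-SIG_LEN:]
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

token = issue_credential("did:example:alice")
assert verify_credential(token, issuer_public_key)         # genuine credential
tampered = token[:-1] + bytes([token[-1] ^ 1])             # flip one bit
assert not verify_credential(tampered, issuer_public_key)  # forgery fails
```

Because verification needs only the issuer’s public key, any platform the ID touches can confirm the credential offline and instantly, which is what makes the scheme practical at internet scale.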

For example, my employer, Unstoppable Domains, offers a profile feature enabling users to link various blockchain assets, including NFTs, awards, and educational credentials, helping to build greater trust in interactions. When it comes to digital IDs, one of the major boons offered by decentralized technology is the ability to verify while maintaining a level of anonymity. Platforms like Nuggets and Polygon ID deliver decentralized identity solutions backed by Zero-Knowledge Proofs (ZKPs). A ZKP lets someone prove a statement about their data without revealing the data itself, serving as a potent defense against fraud while empowering individuals to control precisely which information is disclosed.
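To make the zero-knowledge idea concrete, the toy Schnorr-style identification protocol below shows a prover convincing a verifier that it knows a secret without ever transmitting it. The parameters are deliberately simplified; platforms like Nuggets and Polygon ID rely on far more sophisticated, non-interactive proof systems, but the underlying intuition is the same.

```python
# Toy Schnorr-style zero-knowledge identification. The prover knows a
# secret x behind the public value y = g^x mod p and convinces the
# verifier of that fact without revealing x. Parameters are for
# illustration only; real deployments use vetted groups and
# non-interactive (Fiat-Shamir) variants.
import secrets

p = 2**127 - 1                  # a Mersenne prime, small enough for a demo
g = 3                           # public base

x = secrets.randbelow(p - 1)    # prover's secret (never leaves the prover)
y = pow(g, x, p)                # public commitment, safe to publish

# Round 1: prover commits to a fresh random nonce.
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# Round 2: verifier issues an unpredictable challenge.
c = secrets.randbelow(p - 1)

# Round 3: prover responds, mixing nonce, challenge, and secret.
s = (r + c * x) % (p - 1)

# Verifier accepts iff g^s == t * y^c (mod p); x itself never appeared.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```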

This is a great step forward for user profiles, but what about the content itself? These IDs can address that as well. Any created content can be embedded with a cryptographic “watermark” proving where it originated. Just as with user profiles, platforms can instantly check for this confirmation, and any media missing the correct credentials can be labeled as suspicious or filtered out entirely, depending on the needs of the service.
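Here is a minimal sketch of such a watermark, assuming the creator signs a SHA-256 hash of the media bytes. The manifest fields are hypothetical; real-world efforts such as the C2PA content-credentials standard define much richer provenance records.

```python
# Minimal provenance "watermark" sketch: the creator signs a hash of
# the media bytes, and any platform holding the creator's public key
# can confirm both origin and integrity. Manifest fields here are
# hypothetical, not a real standard.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()
creator_public_key = creator_key.public_key()

def watermark(media: bytes, creator_id: str) -> tuple[bytes, bytes]:
    """Return a signed manifest binding the creator's ID to the content."""
    manifest = json.dumps({
        "creator": creator_id,
        "sha256": hashlib.sha256(media).hexdigest(),
    }).encode()
    return manifest, creator_key.sign(manifest)

def check_watermark(media: bytes, manifest: bytes, signature: bytes) -> bool:
    """Verify the signature, then confirm the hash matches the media."""
    try:
        creator_public_key.verify(signature, manifest)
    except InvalidSignature:
        return False
    claims = json.loads(manifest)
    return claims["sha256"] == hashlib.sha256(media).hexdigest()

article = b"Original reporting, straight from the newsroom."
manifest, sig = watermark(article, "did:example:newsroom")
assert check_watermark(article, manifest, sig)               # authentic
assert not check_watermark(b"tampered copy", manifest, sig)  # altered media fails
```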

Fox News has already taken steps to build such a system to help identify deepfakes and fake news. Dubbed Verify, the system enables users to confirm that content is authentic and indeed originated from the given source.

AI-generated content could also be tagged as such in this system. Instead of outright banning AI use, these agents could be given their own IDs, not as a means to trick people but to identify them and their content as machine-generated. Publications and social platforms could still allow or ban such material, but it would no longer be unclear where it came from, and the public could easily parse what they consume with full context for its origin.
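Continuing the hypothetical manifest from the watermark sketch above, labeling AI output could be as simple as adding fields that name the generating agent:

```python
# Hypothetical manifest for AI-generated content, extending the
# watermark sketch above. The agent gets its own ID and the output is
# labeled, so each platform can set its own policy (allow, label, or
# filter) without guessing at provenance.
ai_manifest = {
    "creator": "did:example:newsroom",            # publisher taking responsibility
    "generated_by": "did:example:writing-agent",  # the AI agent's own ID
    "ai_generated": True,
    "sha256": "<hash of the media bytes, as in the watermark sketch>",
}
```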

Furthermore, embracing such an architecture could significantly help businesses rebuild customer trust. AI isn’t going away, but the recent IBM Global AI Adoption Index 2023 reveals that about 85% of those in the IT industry firmly believe that consumers will heavily prefer companies implementing transparent and ethical AI practices. Digital IDs stand to be a major cornerstone of those practices.

It’s Time To Act

One way or another, the issue of trust raised by AI will need to be addressed as the technology continues to be implemented across so many industries. Digital IDs and humanity checks may not be the only tools for restoring public faith, but they are the ones at our disposal right now. Industry leaders and governments should take note, and soon: given the stakes presented in 2024 and likely beyond, there isn’t any time to lose.

Sandy Carter, Contributor, is COO at Unstoppable Domains and an alumna of AWS and IBM.

Reprinted from Forbes Digital Assets, January 30, 2024. 
