Artificial Intelligence Keynote Speakers

The rise of powerful Artificial Intelligence tools in 2023 has sparked as much excitement as it has fear, so much so that it was a major topic of discussion recently at the World Economic Forum’s 54th annual meeting in Davos. What most concerned attendees was how public trust can be restored in a world where machine-generated content is beginning to muddy the waters of information. While many vital insights were brought up, it’s important to remember that the core of this issue is humans, not machines. Thanks to advancements in cryptography and identity verification, the tools to address these concerns already exist.

The Rise Of The Machines

2023 saw an array of massive advancements being released in the fast-growing field of AI. Services like ChatGPT and Midjourney have allowed regular people all over the world to have access to information, insight, and content creation on a scale and of a quality never before offered by automation. This has already led to some powerful benefits for the industry and has many excited for what could still be coming in 2024 and beyond.

However, these developments have a darker side, such as their propensity to enable misinformation, deepfakes, and fraud. For example, voters in New Hampshire were targeted with robocalls just before last week’s presidential primaries. These fake calls featured an AI-generated message that sounded just like US President Joe Biden, urging them to stay home and not vote. This event is just one practical instance of a growing concern: that this technology will increasingly be used to spread falsehoods and manipulate public perception.

The prevalence of this issue is well documented in the European Union’s second annual disinformation report. The investigation outlines many problematic uses of computer-generated information and media that have been maliciously released. This includes spreading false news surrounding critical political leaders and world events, disseminating content targeting marginalized groups, and creating images and videos of major celebrities.

It isn’t just nefarious actors who use this technology in ways the public doesn’t approve of, either. Even major industry players have been accused of pushing out machine-generated content. Just look at the hot water popular publication Sports Illustrated found itself in recently after it was caught publishing AI-generated content from fake authors — highlighting that even business leaders are susceptible to being a part of the problem.

Davos Takes Notice

These are just a handful of stories, but they point to a larger picture. Given the potential for abuse, the public is understandably wary, if not outright anxious, about what AI can do. The topic has become so pervasive that the WEF’s annual meeting in Davos was themed around “Rebuilding Trust,” with leaders from all over the globe weighing in on the nature of this threat and what could be done about it. Critical insights from global heads of industry and government included the need for transparency, communication, and education as core principles that can be used to rebuild public trust. As Professor Klaus Schwab, Chairperson of the World Economic Forum, noted:

“Open, transparent conversations can restore mutual trust between individuals and nations who, out of fear for their own future, prioritize their own interests. The resulting dynamics diminish hope for a brighter future. To steer away from crisis-driven dynamics and foster cooperation, trust, and a shared vision for a brighter future, we must create a positive narrative that unlocks the opportunities presented by this historic turning point.”

The sentiments coming out of Davos are valuable, but a practical solution that can be deployed in the near future has largely been overlooked. While industrial cooperation and government regulation are essential, everyone must remember that the problem isn’t this new technology; it’s people.

Despite fears of what automation could bring, up until now, every example of AI causing issues has led back to the people who deployed it, the platforms that spread it, and the users who consumed it. At every level, it should be clearer who this content is coming from and what their authority over it is. This is, fortunately, a problem that can be solved.

A Digital Solution For A Human Problem

Currently, there’s no standardized way to confirm where most media online comes from. The same goes for many online personas, who hide behind an avatar with little clarity about who they are. It doesn’t have to stay that way, though: a system of digital identities and credentials that could be independently verified in an unfalsifiable way would change this. If such an ID were utilized ubiquitously across business, social media, and news reporting, it could go a long way in combating much of the misinformation and false content.

How is this possible? By implementing online profiles that follow a user anywhere they go across the internet, a single pseudonymous identity can be created to confirm they are who they say they are. When these profiles are set up, new applicants could perform a one-time “humanity check” that can leverage biometrics, existing documentation, or other verifying information to prove who they are. They can then be granted a “confirmed human” credential linked to this ID. Every step is cryptographically secure to eliminate the possibility of forgery. Now, any service that a given ID interacts with can instantly see this is the verified human they claim to be, not a bot or imposter.
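To make the flow above concrete, here is a minimal sketch of issuing and checking a “confirmed human” credential after a one-time humanity check. All names here (the issuer key, the user ID) are hypothetical, and a symmetric HMAC stands in for the public-key signatures a real identity system would use, so this only illustrates the tamper-evidence idea:

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; a real system would use asymmetric keys
# (e.g., Ed25519) so any service can verify without holding a secret.
ISSUER_KEY = b"issuer-demo-key"

def issue_credential(user_id: str, check_passed: bool) -> dict:
    """After the one-time humanity check, bind a signed claim to the ID."""
    claim = {"id": user_id, "confirmed_human": check_passed}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}

def verify_credential(cred: dict) -> bool:
    """Any service can confirm the credential was not forged or altered."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("alice.demo", True)
assert verify_credential(cred)           # authentic credential passes
cred["claim"]["id"] = "imposter.demo"    # tampering breaks the signature
assert not verify_credential(cred)
```

Because the signature covers the whole claim, changing even one field invalidates the credential, which is what makes forgery detectable at every service the ID touches.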

For example, my employer, Unstoppable Domains, offers a profile feature enabling users to link various blockchain assets, including NFTs, awards, and educational credentials — helping to build greater trust in interactions. When it comes to digital IDs, one of the major boons offered by decentralized technology is the ability to verify while maintaining a level of anonymity. Platforms like Nuggets and Polygon ID deliver decentralized identity solutions backed by Zero Knowledge Proofs (ZKPs). ZKPs offer cryptographic proof, verifying data without revealing the underlying information. This novel tech serves as a potent defense against fraud and empowers individuals to control precisely which information is disclosed, enhancing privacy and agency.
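The “verify without revealing” idea behind ZKPs can be illustrated with a toy Schnorr identification protocol: the prover convinces a verifier that they know a secret exponent x without ever transmitting it. The parameters below are demo-sized assumptions; production systems such as those behind Polygon ID use far larger, carefully chosen groups and non-interactive constructions:

```python
import secrets

# Toy Schnorr protocol: prove knowledge of x with y = g^x mod p,
# without revealing x. Illustration only; not production parameters.
p = 2**127 - 1          # a Mersenne prime, fine for a demo
g = 3
q = p - 1               # modulus for exponent arithmetic

x = secrets.randbelow(q)     # prover's secret (e.g., an identity key)
y = pow(g, x, p)             # public value registered with the ID

# 1. Prover commits to a fresh random nonce.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Verifier issues a random challenge.
c = secrets.randbelow(q)

# 3. Prover responds using the secret, without disclosing it.
s = (r + c * x) % q

# 4. Verifier checks g^s == t * y^c (mod p); x itself was never sent.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check works because g^s = g^(r + c·x) = t · y^c mod p, yet the response s reveals nothing useful about x since r is random — exactly the selective-disclosure property described above.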

This is a great step forward for user profiles, but what about the content itself? These IDs can address this as well. Any created content can be embedded with a cryptographic “watermark” proving where it originated. Just like with user profiles, platforms can instantly see this confirmation or lack thereof. Any media missing the correct credentials will be immediately labeled as suspicious or filtered out entirely, depending on the needs of the service.
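A minimal sketch of such a provenance “watermark” is a record shipped alongside the content that binds it to an origin ID via a content hash; a platform recomputes the hash to confirm the content matches. The origin name here is made up, and real provenance systems (e.g., C2PA manifests) additionally sign this record, but the tamper-detection mechanic is the same:

```python
import hashlib

def watermark(content: bytes, origin_id: str) -> dict:
    """Bind content to its origin via a fingerprint of the bytes."""
    return {"origin": origin_id, "sha256": hashlib.sha256(content).hexdigest()}

def check(content: bytes, mark: dict) -> bool:
    """A platform recomputes the hash to confirm content matches its mark."""
    return hashlib.sha256(content).hexdigest() == mark["sha256"]

article = b"Original reporting from a verified source."
mark = watermark(article, "newsroom.demo")
assert check(article, mark)                     # untouched content verifies
assert not check(article + b" [edited]", mark)  # any alteration is flagged
```

Content arriving without a valid mark, or with a mark that no longer matches, is exactly what a platform would label as suspicious or filter out.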

Fox News has already taken steps to build a system to help identify deepfakes and fake news. Dubbed Verify, the system enables users to confirm the content is authentic and indeed originated from the given source.

AI-generated content could also be tagged as such in this system. Instead of outright banning AI use, these agents could be given their own IDs, not to trick people but to identify them and their content as machine-generated. Publications and social platforms could allow or ban such material, but it would no longer be unclear where it came from. This would also enable the public to easily parse what they consume and always have context for its origin.

Furthermore, embracing such an architecture stands to significantly help businesses bring back customer trust. AI isn’t going away, but the recent IBM Global AI Adoption Index 2023 reveals that about 85% of those in the IT industry firmly believe that consumers will heavily prefer companies implementing transparent and ethical AI practices. Utilizing digital IDs stands to be a major cornerstone of that.

It’s Time To Act

One way or another, the issue of trust raised by AI will need to be addressed if the technology continues to be implemented across so many industries. Digital IDs and humanity checks may not be the only tools that will be used to bring back public faith, but they’re the ones at our disposal right now. Industry leaders and governments should take note, and soon, because there isn’t any time to lose, given the stakes presented in 2024 and likely beyond.

Sandy Carter, Contributor, is COO at Unstoppable Domains and an alumna of AWS and IBM.

Reprinted from Forbes Digital Assets, January 30, 2024. 

As a healthcare professional, you might agree that something has to change in our healthcare system. While we could debate public policy, insurance carriers, and the law, I’ll make the case that there are significant advances in technology that could fundamentally change healthcare in the US and the rest of the world.

These days, Artificial Intelligence (AI) has become a fixture of the popular press. You might have seen the singer Common talking about AI in a recent Microsoft TV ad. As consumers, we experience the power of Siri, Alexa, or Google to recognize speech, and if you’re on Facebook, you’ve seen how well facial recognition can work. Recently an AI-powered application beat a human at the game of Go, a feat many thought was another ten years away.

In the world of medicine, we are seeing similar advances in the potential for AI to provide the precision diagnostic capability of the world’s best ophthalmologist. One of my former students, Dr. Anthony Chang, has taken his considerable knowledge and network and launched the AIMed conference series because he believes it’s time to bring the world of healthcare closer to AI, Big Data, and Cloud Computing.


But anyone in the world of machine learning and AI will tell you that the more data we can learn from, the more accurate the resulting analytics. So where is all of this data?

Most hospitals have over 1,000 machines: MRI scanners, CAT scanners, gene sequencers, drug infusion pumps, blood analyzers, etc. Unfortunately, these machines are all balkanized. Each Siemens, GE, Beckman, Abbott, Illumina, Philips machine speaks its own language. If you’ve been in computing a long time, you’ll recognize we used to be this way. Our AS/400, Unix, mainframe, and client-server applications existed in their own worlds, able to communicate only with their own tribe.


In the 1990s this all began to change. The creation of the Internet, based on TCP/IP, changed everything because finally we could have different kinds of machines talk to each other. In the mid-1990s, when the Internet had roughly 1,000,000 machines connected, companies like Netscape, eBay, and Amazon were created. At 10,000 machines no one would have cared, but at 1,000,000 it mattered. Fast-forward to today: with billions of machines connected, our experiences with buying books, making travel reservations, or moving money are dramatically different.

Now consider the small world of pediatric hospitals. There are today about 500 such hospitals around the world, and on average there are 1,000 machines in each hospital. What if we could connect them all? Maybe, like the consumer Internet, with 500,000 machines connected, healthcare could become dramatically different. Sadly, most of the attention today is on EMR/EHR applications, where doctors spend their evenings and weekends typing data into these ancient pre-Internet applications. But the massive amounts of data that will power AI applications are not there. The data is in the machines: the blood analyzers, gene sequencers, CAT scanners, and ultrasounds. Maybe if we could just start by connecting the machines in all the pediatric hospitals, we could make a difference in the lives of the 2.2B children in the world.

Timothy Chou was one of only six people ever to hold the President title at Oracle. He is now in his 12th year teaching cloud computing at Stanford and recently launched another book, Precision: Principles, Practices and Solutions for the Internet of Things. Invite Timothy to keynote your next event!

The day began as every other, but as soon as I arrived at work, I received an email: “Happy One Year Anniversary!” I love working at Amazon Web Services because I am on a continuous learning path. I run the group that helps to migrate Enterprise workloads onto the AWS Cloud. I have the opportunity to work with some of our largest customers and partners like VMware, Microsoft, and SAP.

Learning is a core part of my role at AWS, but it is also about sharing those lessons. To celebrate my one-year anniversary, I want to share some of the things I’ve learned at AWS.

6 Lessons I’ve learned in the past year:

1.    Customers first. This lesson is still the most important in my journey at AWS. Customers are always first. I love the story of Low Flying Hawk. At AWS, we genuinely read the forums and listen to customers. A person with the alias of Low Flying Hawk was constantly suggesting new features in one such forum, and the team came to look forward to that feedback so much that they would ask in meetings “What would Low Flying Hawk say?” Low Flying Hawk didn’t spend a huge sum with AWS, but this person’s input was so valued that we recently named an Amazon building Low Flying Hawk to honor the importance of what those comments and requests represent to AWS. It is real and lived hourly. Great products and services come from deeply understanding your customer. If we jump straight to a solution without spending time thinking about customer needs, we limit our options for inventing a delightful experience for customers.

2.    Learn from others. At AWS, we don’t present PowerPoints; we read six-page written narratives. Jeff discusses this process in the Amazon Shareholder Letter. It was a bit hard to get used to at first, sitting quietly and reading, but now I get the importance of this process. Once we are done reading, anyone and everyone can ask a question or make a comment. It is a conversation starter to achieve clarity and customer focus. We read, discuss, and debate. We revise and make the idea better with each iteration. We push ourselves to invent on behalf of the customer. It is a pure learning experience that makes the ideas better and stronger. For example, our team had dreamed big on a new approach, but through our narrative process we found out that the technical approach just wasn’t going to work. And that’s what you have to do. Dream big, but iterate and go deep, get the data, and figure out if the idea really has legs.

3.    It’s usually the second, or third idea. One of my customers, Bridget Frey from Redfin, shared with me an illustration and story from one of her favorite graphic novelists, Kazu Kibuishi. His drawings are haunting, beautiful and complex. But they don’t start out that way. In his creative process, he forces himself to use the least expensive notebook paper for his initial drawings, to remind himself that his early drafts are disposable. It’s a reminder that innovators can fall into this trap, where we’re too precious in our designs. We refine a single approach, rather than starting anew with different ideas on different sheets of paper. Innovation doesn’t usually feel like one inspiring idea after another. But as you experiment you test new theories. You fall in love with an idea, you give it everything. And then you realize you were wrong, and you have to be ready to pick yourself up and fall in love with the next idea.

4.    It’s ok to fail! My team had a tough problem to solve and at first, we took a pure tech approach. We tried to completely automate a migration task, thinking technology was all people would need to get the job done. We thought we were being innovative – but we were building something that our customers couldn’t leverage until the culture inside their organizations was addressed. So, we started offering Digital Innovation workshops. A big part of innovation is just getting really good at learning from the ideas that don’t work, so you have space for the ones that will.

5.    AWS democratizes technology! I knew that the AWS Cloud was very powerful but I had no idea how much it was impacting customers. Recently at the San Francisco Summit, I heard the story of how it is powering the leaderboard of Peloton, helping Cerner in healthcare, and even making it easier to use machine learning with Adobe. Things that AWS has created like Amazon SageMaker are just too cool! Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning. Machine learning often feels a lot harder than it should to most developers because the process to build and train models, and then deploy them into production, is too complicated and too slow. Or consider launch templates, developed from the input of hundreds of customers who wanted to make launching instances easier. Launch templates for Amazon EC2 have made it flexible and easy for our developers. With launch templates you can apply access controls, ensure tagging policies, and even make sure that developers in the organization are only launching the most recent, patched version of your AMIs.

6.    Make history. The Amazon mantra is work hard, have fun, and make history. I am motivated to help our customers make history. I’ve learned a lot as I have gained more experience working with a great team on great projects. For example, working on the partnership with VMware to reduce the cost of hybrid cloud while offering unparalleled access to the cloud will be one of those partnerships of a lifetime.

Sandy Carter is VP at Amazon Web Services, a Forbes 2016 Digital Influencer, and one of CNN’s Top 10 Most Powerful Women in Tech; she is the recipient of more than 25 social media awards for innovative and successful implementation of social business techniques. Invite Sandy to your next event!

*For a complete list of speakers on this topic "contact us”.