

Okay, enough.  Enough with the cell phone nonsense.  America, check that, the World needs a lesson in cell phone civility.

First of all let me say I’m a huge cell phone user.  I use my smart phone for texting, emailing, making reservations, driving directions, listening to music, and the other day actually made a call with it.  My cell phone has a hard drive big enough for 4000 songs…or one voice mail from my sister.  I’m NOT anti-cell phone…I’m anti-cell phone rudeness.  It’s getting out of hand and I think the reason is this whole self-absorption thing.  People love to hear themselves talk.

But I don’t like hearing you talk.  I don’t want to hear you argue with your wife because you think she’s messing around with Ralph (actually heard this in the San Antonio airport).  I don’t want to know that your underling better get his sales numbers up or you’re going to personally kick him in the a#% (Denver airport).  And I surely don’t want to know the color of your baby’s vomit (my grocery store).  These are PUBLIC places.  That is PERSONAL information.  This has not been helped by the advent of Bluetooth earpiece devices.  A device cleverly crafted to put the audio output in the inner depths of your ear so you can clearly hear, but placing the microphone nowhere near your mouth, so you feel like shouting.  It would work perfectly if your mouth were on the side of your head, but not many people are built that way.  And here’s a tip for you if you’re a Bluetooth person:  while you’re talking on that thing, do not make eye contact with someone.  I know this sounds odd, but if your lips are moving and you look at me, I’m gonna assume you’re talking to me.  It’s true, I shouldn’t tell a complete stranger “I love you too”, but she said it first.

I may have seen the most egregious violation of cell phone civility recently when a lady waiting to board a plane decided to talk to her birds back home.  For real.  One of them was apparently homesick for her, or it could’ve been the Bird Flu.  Those ailments are very hard to distinguish I’m told.

You can see where this is going.  She put her phone on “speaker” and then began squawking and cooing and shrieking to her aviary friends back at the house.  I wish this were a lie.  I’ve got a hundred witnesses that say it’s not.  Apparently these noises require animation to get them right, because she was gyrating in her chair like someone had rubbed Icy Hot in her underpants (I’ve got personal experience from junior high gym class on that one).  Everybody in the boarding area thought we were being punk’d.  I couldn’t identify what type of birds she was communicating with or what she was saying, but I think it had something to do with the neighbor’s cat.  Not sure on that one, all I know is that it was annoying and hilarious at the same time, a very difficult combination to achieve.  It was also something I wish I would have recorded and posted on YouTube because that baby would have instantly gone viral.

So what do you do when that knothead’s cell phone goes off at the wrong time, like in the middle of a speech or when you’re standing at the urinal (that guy better have a “hands free” device)?  Here’s a suggestion:  join in.  That’s right, make it a conference call.  Get loud, argue, make suggestions, and invite others to join in as well.

But you better be bigger or faster than the guy taking the call.

I’m just sayin’.  Mark Mayfield

A Funny Speaker with a Serious Message

How do we double our revenues and quadruple our margins using software?

Based on an executive workshop held in Minneapolis, MN in 2019

If you’re the CEO or board member of a company that manufactures any healthcare, construction, agriculture, power generation, pharmaceutical or industrial machine you’ve probably heard about IoT, edge, AI, 5G and cloud computing. But why should you care? Why should your company care?

While finding ways to use technology to save money is always good, the bigger driver is using software to increase revenue. I’ll make the case that, as a manufacturer of construction, packaging, oil, gas, healthcare or transportation machines, you can double your revenues and quadruple your margins by building and selling digital service products. Furthermore, you’ll create a barrier that your competition will find difficult to cross.

Software Defined Machine

Next-generation machines are increasingly powered by software. Porsche’s latest Panamera has 100 million lines of code (a rough measure of the amount of software), up from only two million lines in the previous generation. Tesla owners have come to expect new features delivered through software updates to their vehicles. A software-defined automobile is the first car that will end its life with more features than it began with. But it’s not only cars: healthcare machines are also becoming more software defined. A drug-infusion pump may have more than 200,000 lines of code, and an MRI scanner more than 7,000,000. A modern boom lift — commonly used on construction sites — has 40 sensors and three million lines of code, and a farm’s combine harvester has over five million. Of course, we can debate whether this is a good measure of software, but I think you get the point: machines are increasingly software defined.

So, if machines are becoming more software defined, then the business models that applied to the world of software may also apply to the world of machines. In the rest of this article we’ll cover three business models.

Business Model 1: Product and Disconnected Digital Services

Early on in the software industry we created products and sold them on a CD; if you wanted the next product, you’d have to buy the next CD. As software products became more complex, companies like Oracle and SAP moved to a business model where you bought the product (e.g., ERP or database) together with a service contract. That service contract was priced at roughly 2% of the purchase price of the product per month. Over time, this became the largest and most profitable component of many enterprise software product companies. In the year before Oracle bought Sun Microsystems (when they were still a pure software business), they had revenues of approximately $15B, only $3B of which was product revenue, the other $12B (over 80%) was high-margin, recurring-service revenue.
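To see why the service contract came to dominate, run the arithmetic: at roughly 2% of the purchase price per month, service revenue overtakes the original product price in about four years. A quick sketch (the $1M license price is hypothetical; the Oracle split comes from the paragraph above):

```python
# Back-of-the-envelope Model 1 economics: product sale plus a service
# contract priced at ~2% of the purchase price per month.
license_price = 1_000_000            # hypothetical $1M product purchase
service_rate_per_month = 0.02        # ~2% of purchase price per month

annual_service = license_price * service_rate_per_month * 12
years_to_overtake = license_price / annual_service
print(annual_service, years_to_overtake)   # 240000.0, ~4.2 years

# Oracle's reported split in the year before the Sun acquisition
total_revenue, product_revenue = 15e9, 3e9
service_share = (total_revenue - product_revenue) / total_revenue
print(f"{service_share:.0%}")              # 80%
```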

But what is service? Is service answering the phone nicely from Bangalore? Is it flipping burgers at McDonald’s? The simple answer is no. Service is the delivery of information that is personal and relevant to you. That could be the hotel concierge telling you where to get the best Szechwan Chinese food in walking distance, or your doctor telling you that, based on your genome and lifestyle, you should be on Lipitor. Service is personal and relevant information.

I’ve heard many executives of companies who make machines say, “Our customers won’t pay for service.” Well, of course, if you think service is break-fix, then the customer clearly thinks you should build a reliable product. Remember Oracle’s service revenue? In 2004, the Oracle Support organization studied the 100 million requests for services from Oracle support and over 99.9% of those requests were answered with known information. Aggregating information for thousands of different uses of the software, even in a disconnected state, represented huge value over the knowledge of a single person in a single location. Service is not break-fix. Service is personal and relevant information about how to maintain or optimize the availability, performance or security of the product. All delivered in time and on time.

Business Model 2: Product and Connected Digital Services

The next major step in software business models was to connect to the computers that ran the software. This enabled even more personal and more relevant information on how to maintain or optimize the performance, availability and security of the software product. These digital services are designed to assist IT workers in maintaining or optimizing the product (e.g., database, middleware, financial application). For example, knowing the current patch level of the software enables the service to recommend only those relevant security patches be applied. Traditional software companies charge between 2 and 3% of the product price per month for a connected digital service. The advantage of this model is the ability to target the installed base of enterprises, which have purchased the product in the traditional Model 1.

Now let’s move to the world of machines. If a company knows both the model number and current configuration of the machine, as well as the time-series data coming from hundreds of sensors, then the digital service can be even more personal and relevant and allows the company to provide precision assistants for workers who maintain or optimize the performance, availability and security of the healthcare, agriculture, construction, transportation or water purification machine.

Furthermore, assume you build this digital service product and price it at just 1% of the purchase price of the product per month. If your company sells a machine for $200K and you had an installed base of 4,000 connected machines, you could generate nearly $100M of high-margin, annual recurring revenue. And since digital service margins can be much bigger than product margins, companies that have moved to just 50/50 models (50% service, 50% product) have seen their margins quadruple.
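The claim is easy to check. A sketch with the numbers above (the arithmetic lands just under the round $100M figure):

```python
# Annual recurring revenue from a connected digital service priced at
# 1% of the machine's purchase price per month (numbers from the text).
machine_price = 200_000      # $200K machine
monthly_rate = 0.01          # 1% of purchase price per month
installed_base = 4_000       # connected machines in the field

arr = machine_price * monthly_rate * installed_base * 12
print(f"${arr/1e6:.0f}M per year")   # $96M per year, roughly the $100M quoted
```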

While this business model has been aggressively deployed in high tech, we are still in the early days with machine manufacturers. There are some early leaders. Companies like GE and a major elevator supplier derive 50% of their revenue from service. Voltas, a large HVAC manufacturer, is an 80/20 company — meaning they derive 20% of their revenue from services. In healthcare, Abbott has introduced a digital service product called AlinIQ, and Ortho Clinical is selling Ortho Care as an annual subscription service. While some of this is lower-margin, human-powered, disconnected service, the value of a recurring revenue stream is not lost on the early leaders.

Business Model 3: Product-as-a-Service

Once you can tell the worker how to maintain or optimize the security, availability or performance of the product, the next step is to simply take over that responsibility as the builder of the product. Over the last fifteen years we’ve seen the rise of Software-as-a-Service (SaaS) companies such as Salesforce.com, Workday and Blackbaud, which all deliver their products as a service. In the past seven years this has also happened with server hardware and storage products, as companies like Amazon, Microsoft and Google provide compute and storage products as a service.

All of these new product-as-a-service companies have also changed the pricing to a per-transaction, per-seat, per-instance, per-month or per-year model. We’re likely to see the same with agricultural, construction, transportation and healthcare machines. Again, there are some early examples: Kaeser Compressors is delivering air-as-a-service and AGCO is selling sugar cane harvesters by the bushel harvested. In the consumer world we’re all familiar with Uber and Lyft, which provide transportation machines as a service — priced per ride. Of course, the most expensive operating cost of the ride is the human labor, so like those of us in high-tech software and hardware products, they are looking at replacing the human labor with automation.

So why should you care about IoT, edge, 5G, AI and cloud computing? Not because they are cool technologies, but because they will enable you to double your top-line revenues and quadruple your margins with high-quality recurring revenue. And all the while, you’ll be building a widening gap with your competition.

For more detail see the five keys to building digital service products and selling them.

By Rohit Bhargava

All week I have been reading stories of award-winning creative advertising from Cannes (see all the winners here), so I wanted to share insights from a few of my favorite award-winning campaigns this year.

Encouraging Black Travelers To #GoBackToAfrica
For years, chants of “go back to Africa” have been used by racists to marginalize people, but this campaign aims to turn the phrase around and make it an aspirational appeal for more African-Americans to visit Africa. It’s unusual, unexpected and helps people reevaluate Africa – something I’m planning to do myself for the next few weeks.

The Last Issue Ever: A Porn Magazine That Killed Itself

This Cannes award-winning campaign was an ingenious idea from a Polish news site: buy a long-running but ailing porn magazine and publish one final issue featuring stories of female empowerment instead of nude photos. The symbolic campaign was widely praised, disruptive and forced a much-needed discussion around gender stereotypes in Poland and beyond.

Child “Expert” Teaches Adults To Prepare For Gun Violence
Probably the most emotional ad I watched this week was one featuring an expert teaching workers how to survive an active-shooter situation. That “expert” was a child, and her training session puts a heartbreaking exclamation point on the reality of gun violence and spotlights the now-necessary training that 95% of children in American schools go through.

By Rohit Bhargava

There is a moment during a show at Austin City Limits where Ed Sheeran finally starts to perform his biggest hit at the time, Shape of You. About 10 seconds into the performance, he breaks a guitar string. Watching what he does next is a master class in stage presence and preparation, and perhaps the perfect example of why he is such an engaging stage performer:

For anyone who has watched one of my keynote presentations, you already know I’m a big fan of Ed Sheeran. Usually when I share a story of a video of his performance, I talk about what I learned from it by watching a member of the audience just enjoy the show without distraction. I don’t, however, usually talk about just how good he is at marketing.

Right now, Ed Sheeran is everywhere. He is featured in the new film Yesterday. He’s launched everything from a new line of signature guitars to a Spanish restaurant in London. And today his new album featuring collaborations with a dozen top artists is getting released. It is likely to be one of the biggest hits of the year.

Along the way, he and his team are using some savvy marketing techniques to make the launch as big as possible. Here’s a deeper look at a few of them along with some marketing lessons they offer for the rest of us:

1. Condense the excitement.


Today, in 32 cities around the world, there will be a pop-up store where fans can buy all kinds of merchandise. He announced it to most fans just a few days ago, and in some locations the store will only be open for a short window of time … 3:06pm to 9:06pm in New York, for example. All of which condenses fan excitement into a short window and is likely to create a huge viral sensation on social media and through word of mouth.

2. Find partners you believe in.


Ed Sheeran loves ketchup. In fact, he’s so much of a fan that he has a tattoo of the Heinz ketchup logo on his arm. So now Heinz is launching a limited edition version of their signature product named after him: “Edchup.” It’s a fun campaign, and works far better than many similar influencer marketing campaigns because it’s a sadly rare situation where the influencer in question loved the product long before he was paid to promote it.

3. Keep communications simple.


Emails from Ed are sparse, clear and written at almost an elementary school reading level. But they work because they are to the point and typically spotlight music and videos that his fans want to watch or listen to without getting in the way. It’s a perfect example of understanding what people really want and making sure you’re not overthinking your communications.

4. Create a sequence.

With this launch, as with his previous releases, every song is timed to go out in a drip campaign that allows people to watch and listen to one song, fall in love with it, and then get another one. Today, for launch, his collaboration with Travis Scott and its official video are live. Over the next several weeks, he will likely continue to release new videos for each song – creating another wave of interest and engagement from his fans.

5. Encourage your fans to obsess.

Every new song is also released with a lyric video, which helps fans learn the words to songs more quickly and encourages a short term obsession with the song until they learn it. The lyric video for his collaboration with Justin Bieber for the song I Don’t Care has been watched more than 100 million times – ensuring fans will be humming the song to themselves with perfect knowledge of all the lyrics.

6. Expand your audience.

Some of the most engaging videos Ed Sheeran uses are unscripted backstage collaborations with other artists in casual settings. His performances of 2002 with Anne-Marie and Your Song with Rita Ora are both posted on their respective YouTube pages, which allows him not only to create something his fans will find, but also to inspire fans of his collaborators to get to know and fall in love with him as an artist as well.

By Timothy Chou

It’s no secret that over the past 4 years there have been dramatic improvements in the use of AI technology to recognize images, translate text, win the game of Go or talk to us in the kitchen. Whether it’s Google Translate, Facebook facial recognition or Amazon’s Alexa, these innovations have largely been focused on the consumer.

On the enterprise side progress has been much slower. We’ve all been focused on building data lakes (whatever that is), and trying to hire data scientists and machine learning experts. While this is fine, we need to get started building enterprise AI applications. Enterprise AI applications serve the worker, not the software developer or business analyst. The worker might be a fraud-detection specialist, a pediatric cardiologist or a construction site manager. Enterprise AI applications leverage the amazing amount of software that has been developed for the consumer world. These applications have millennial UIs and are built for mobile devices, augmented reality and voice interaction. Enterprise AI applications use many heterogeneous data sources inside and outside the enterprise to discover deeper insights, make predictions, or generate recommendations. A good example from the consumer world is Google Search. It’s an application focused on the worker, not the developer, with a millennial UI, and it uses many heterogeneous data sources. Open up the hood and you’ll see a ton of software technology inside.

With the advent of cloud computing and the continued development of open source software, building application software has changed dramatically in the past 5 years. It might be as dramatic as moving from ancient mud brick to modern prefab construction. As you’ll see, we have a ton of software technology that’s become available. Whether you’re an enterprise building a custom application, or a new venture building a packaged application, you’ll need to do three things.

  1. Define the use-case. Define the application. Who is the worker? Is it an HR professional, reliability engineer or a pediatric cardiologist?
  2. The Internet is the platform. Choose wisely. We’ll discuss this more in depth in this article.
  3. Hire the right team. The team will have a range of expertise including business analysts, domain experts, data scientists, data engineers, DevOps specialists and programmers.

For enterprises that are considering building scalable, enterprise-grade AI applications there’s never been a better time — there are hundreds of choices, many inspired by innovations in the consumer Internet. To understand the breadth, I’ve arbitrarily created sixteen different categories, each with a brief description and some example products. We’ll mix open source software, which can run on any compute and storage cloud service, with managed cloud services.

  1. Compute & Storage Cloud Services provide compute and storage resources on demand, managed by the provider of the service. While you could build your application using on-premises compute & storage, it would both increase the number of technology decisions and raise the overall upfront cost, both in capital equipment and in the people to manage the resources. Furthermore, the ability to put 1,000 servers to work for 48 hours for less than $1,000 is an economic model unachievable in the on-premises world. Choices include but are not limited to AWS, Google Cloud, Microsoft Azure, Rackspace, IBM Cloud, AliCloud.
  2. Container Orchestration. VMware pioneered the ability to create virtual hardware machines, but VMs are heavyweight and non-portable. Modern AI applications are using containers based on OS-level virtualization rather than hardware virtualization. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host file system, they are portable across clouds and OS distributions. Container orchestration software manages computing, networking, and storage infrastructure on behalf of user workloads. Choices include but are not limited to Kubernetes, Mesos, Swarm, Rancher and Nomad.
  3. Batch Data Processing. As data set sizes get larger, an application needs a way to efficiently process large datasets. Instead of using one big computer to process and store the data, modern batch data processing software allows clustering commodity hardware together to analyze large data sets in parallel. Choices include but are not limited to Spark, Databricks, Cloudera, Hortonworks, AWS EMR and MapR.
  4. Stream Data Processing. An AI application designed to interact with near real-time data will need stream data processing software. Streaming data processing software has three key capabilities: publish and subscribe to streams of records, store streams of records in a fault-tolerant, durable way, and process streams of records as they occur. Choices include but are not limited to Spark Streaming, Storm, Flink, Apex, Samza, IBM Streams.
  5. Software Provisioning. From traditional bare metal to serverless, automating the provisioning of any infrastructure is the first step in automating the operational life cycle of your application. Software provisioning frameworks are designed to provision the latest cloud platforms, virtualized hosts and hypervisors, network devices and bare-metal servers. Software provisioning provides the connecting tool in any of your process pipelines. Choices include but are not limited to Ansible, Salt, Puppet, Chef, Terraform, Troposphere, AWS CloudFormation, Docker Suite, Serverless and Vagrant.
  6. IT Data Collect. Historically, many IT applications were built on SQL databases. Any analytic application will need the ability to collect data from a variety of SQL data sources. Choices include but are not limited to Teradata, Postgres, MongoDB, Microsoft SQL Server and Oracle.
  7. OT Data Collect. For analytic applications involving sensor data, there will be the need to collect and process time-series data. Products include traditional historians such as AspenTech InfoPlus.21, OSISoft’s PI, Schneider’s Wonderware and traditional database technologies extended for time-series such as Oracle. For newer applications, product choices include but are not limited to InfluxDB, Cassandra, PostgreSQL, TimescaleDB, OpenTSDB.
  8. Message Broker. A message broker is a program that translates a message from a messaging protocol of the sender, to a messaging protocol of the receiver. This means that when you have a lot of messages coming from hundreds of thousands to millions of end points, you’ll need a message broker to create a centralized store/processor for these messages. Choices include but are not limited to Kafka, Kinesis, RabbitMQ, Celery, Redis and MQTT.
  9. Data Pipeline Orchestration. Data engineers create data pipelines to orchestrate the movement, transformation, validation, and loading of data, from source to final destination. Data pipeline orchestration software allows you to identify the collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies. Choices include but are not limited to Airflow, Luigi, Oozie, Conductor and Nifi.
  10. Performance Monitoring. Any application, including an analytic application, requires real-time performance monitoring to find bottlenecks and ultimately to be able to predict performance. Choices include but are not limited to Datadog, AWS Cloudwatch, Prometheus, New Relic and Yotascale.
  11. CI/CD. Continuous integration (CI) and continuous delivery (CD) software enables a set of operating principles, and collection of practices that enable analytic application development teams to deliver code changes more frequently and reliably. The implementation is also known as the CI/CD pipeline and is one of the best practices for devops teams to implement. Choices include but are not limited to Jenkins, Circle CI, Bamboo, Semaphore CI and Travis.
  12. Backend Framework. Backend frameworks consist of languages and tools used in server-side programming in an analytic application development environment. A backend framework is designed to speed the development of the application by providing a higher-level programming interface to design data models, handle web requests, and other commonly required features. Choices include but are not limited to Flask, Django, Pyramid, Dropwizard, Elixir and Rails.
  13. Front-end Frameworks. Applications need a user interface, and there are numerous front-end frameworks used for building them. These frameworks serve as a base for the development of single-page or mobile applications. Choices include, but are not limited to, Vue, Meteor, React, Angular, jQuery, Ember, Polymer, Aurelia, Bootstrap, Material UI and Semantic UI.
  14. Data Visualization. An analytic application needs plotting software to produce publication-quality figures in a variety of hard-copy formats and interactive environments across platforms. Data visualization software lets you generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. Choices include, but are not limited to, Tableau, PowerBI, Matplotlib, d3, VX, react-timeseries-chart, Bokeh, seaborn, plotly, Kibana and Grafana.
  15. Data Science. Data science tools allow you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, and support for large, multi-dimensional arrays and matrices. Choices include, but are not limited to Python, R, SciPy, NumPy, Pandas, NetworkX, Numba, SymPy, Jupyter Notebook, Jupyter Labs.
  16. Machine Learning. Machine learning frameworks provide useful abstractions to reduce the amount of boilerplate code and speed up deep learning model development. ML frameworks are useful for building feed-forward networks, convolutional networks as well as recurrent neural networks. Choices include, but are not limited to, Python, R, TensorFlow, Scikit-learn, PyTorch, Spark MLlib, Spark ML, Keras, CNTK, DyNet, Amazon Machine Learning, Caffe, Azure ML Studio, Apache MXNet and MLflow.
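To make item 8 concrete, here is a toy in-process publish/subscribe broker in Python. It is a hypothetical minimal sketch of the store-and-dispatch pattern; real brokers like Kafka or RabbitMQ add persistence, partitioning and fault tolerance at scale.

```python
# A toy in-process message broker illustrating publish/subscribe.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.topics = defaultdict(list)   # topic -> subscriber callbacks
        self.log = defaultdict(list)      # topic -> retained messages (the "store")

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, message):
        self.log[topic].append(message)   # keep the stream of records
        for callback in self.topics[topic]:
            callback(message)             # dispatch to every subscriber

broker = Broker()
received = []
broker.subscribe("sensor/temp", received.append)
broker.publish("sensor/temp", {"machine": "pump-7", "temp_c": 81.5})
print(received)   # [{'machine': 'pump-7', 'temp_c': 81.5}]
```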
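The heart of item 9, running tasks in dependency order, fits in a few lines with Python's standard-library graphlib; production orchestrators like Airflow or Luigi add scheduling, retries and monitoring on top. The ETL pipeline below is hypothetical:

```python
# Order tasks by their dependencies, the core job of pipeline orchestration.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (a hypothetical ETL pipeline)
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(pipeline).static_order())
print(order)   # ['extract', 'validate', 'transform', 'load', 'report']
```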
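As for item 16, what every ML framework ultimately automates is fitting model parameters by gradient descent. A from-scratch sketch on a tiny synthetic dataset (y = 2x), no framework required:

```python
# Fit a one-weight linear model by gradient descent on mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # y = 2x, so the weight should converge to 2

w = 0.0       # single learnable weight
lr = 0.02     # learning rate
for _ in range(500):
    # gradient of mean((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))   # converges to ~2.0
```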

If you’re curious, check out some of the product choices Uber made.

We need to begin the next era of enterprise software and start to build custom or packaged enterprise AI applications. Applications that serve the workers, not developers; have millennial UIs and use the oceans of data coming from both the Internet of People and the Internet of Things. Luckily many of the infrastructure building blocks are now here, so stop using those mud bricks.

Timothy Chou was one of only six people ever to hold the President title at Oracle. He is now in his 12th year teaching cloud computing at Stanford and recently launched another book, Precision: Principles, Practices and Solutions for the Internet of Things. Invite Timothy to keynote your next event!

I’ve been wondering for a while what might be next for enterprise software. Whether you’re a small private company or a large public one, where should you invest your time and money?

Maybe looking into the past can give us some guidance. Enterprise software has gone through three distinct eras. In the 1st era, infrastructure software companies like Microsoft and Oracle emerged, focused on programmers. Software developers used Microsoft Visual Basic and the Oracle database to build custom workflow applications for the enterprise throughout the 90s. By the late 90s the 2nd era of enterprise software began with the creation of packaged on-premises enterprise workflow applications. Companies emerged including PeopleSoft, Siebel, SAP and Oracle. These applications focused on automating key workflows like order-to-cash, purchase-to-pay or hire-to-fire. Enterprises didn’t need to hire programmers to develop these workflow applications; they only needed to buy, implement and manage them. The 3rd era began in the 2000s with the delivery of packaged workflow applications as a cloud service. Examples abound, including Salesforce, Workday, Blackbaud and ServiceNow. This 3rd era eliminated the need for the enterprise to hire operations people to manage the applications and has accelerated the adoption of packaged enterprise workflow applications. While you could still hire programmers to write a CRM application, and operations people to manage it, why would you?

Let’s now switch our attention to analytics, which is not focused on automating a process, but instead on learning from the data to discover deeper insights, make predictions, or generate recommendations. Analytics has been populated with companies specializing in the management of the data (e.g., MongoDB, Teradata, Splunk, Cloudera, Snowflake, Azure SQL, Google BigQuery, Amazon Redshift); companies dedicated to providing tools for developers or business analysts (e.g., SAS, Tableau, Qlik and Pivotal); as well as software for data engineers, including formerly public companies such as Mulesoft (acquired by Salesforce) and Informatica (acquired by Permira).

Furthermore, thanks to innovations in the consumer Internet (e.g., Facebook facial recognition, Google Translate, Amazon Alexa), there are now hundreds of open source software packages and cloud services available which provide a wide array of AI and analytic infrastructure software building blocks. For those interested in geeking out, here is a brief introduction. Some of this technology will be dramatically lower cost. Consider: today, for about $1,000, I can get 1,000 servers for 48 hours to go through a training cycle to build a machine learning model.

I’m going to use the label AI to refer to the entire spectrum of analytic infrastructure technology, and also because it sounds cooler. Today we are largely in the 1st era. The software industry is providing AI infrastructure software and requiring the enterprise to hire the programmers and ML experts to build the application, as well as the DevOps people to manage the deployment. This is nearly the same as the 1st era of enterprise workflow software.

If we’re to follow the same sequence as workflow applications we need to move beyond the 1st era focused on developers and start building enterprise AI applications.

So what is an enterprise AI application?

Enterprise AI applications serve the worker, not the software developer or business analyst. The worker might be a fraud-detection specialist, a pediatric cardiologist or a construction site manager.

Enterprise AI applications have millennial UIs and are built for mobile devices, augmented reality and voice interaction.

Enterprise AI applications use historical data. Most enterprise workflow applications eliminate data once the workflow or the transaction completes.

Enterprise AI applications use lots of data. Jeff Dean has taught us that with more data and more compute we can achieve near-linear accuracy improvements.

Enterprise AI applications use many heterogeneous data sources inside and outside the enterprise to discover deeper insights, make predictions, or generate recommendations and learn from experience.
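One hedged way to picture the more-data claim above is the power-law learning curve reported in empirical scaling studies, where test error falls smoothly as a power of dataset size. A toy sketch (the constants `a` and `b` are illustrative, not measured):

```python
# Toy power-law learning curve: error(n) = a * n ** (-b).
# a and b are invented constants for illustration, not fitted to real data.
a, b = 1.0, 0.35

def error(n: int) -> float:
    """Hypothetical test error after training on n examples."""
    return a * n ** (-b)

for n in [10_000, 100_000, 1_000_000]:
    print(f"n={n:>9,}  error={error(n):.3f}")
```

Under a model like this, each 10x increase in data buys a roughly constant multiplicative reduction in error, which is one reading of the "more data, better accuracy" observation.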

A good example of a consumer AI application is Google Search. It’s an application focused on the worker, not the developer, with a millennial UI and uses many heterogeneous data sources. Open the hood and you’ll see a ton of infrastructure software technology inside. So what are the challenges of building enterprise AI applications?

  1. The nice thing about transactional or workflow applications is that the processes they automate are well defined and follow some standards. Thus, there is a finite universe of these apps. Enterprise AI applications will be much more diverse and serve workers as different as the service specialist for a combine harvester, a radiologist or the manager of an offshore oil drilling rig.
  2. The application development teams will be staffed differently. Teams will have a range of expertise, including business analysts, domain specialists, data scientists, data engineers, DevOps specialists and programmers. With such a wide array of cloud-based software, even programming will look different.
  3. Finally, the development of these analytic applications will require a different methodology than was used to build workflow applications. In workflow applications we can judge whether the software worked correctly or not. In enterprise AI applications we’ll have to learn the definition of an ROC curve and determine what level of false positives and false negatives we’re willing to tolerate.
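For readers meeting ROC curves for the first time, the tradeoff in point 3 can be sketched by sweeping a decision threshold over model scores and counting each error type at every setting. The labels and scores below are invented for illustration:

```python
# Minimal ROC sweep: classify scores >= threshold as positive and
# count the resulting hits and errors. Data is made up for illustration.
labels = [1, 0, 1, 1, 0, 0, 1, 0]                   # 1 = actual positive
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.1]  # model confidence

def rates(threshold: float):
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    tpr = tp / (tp + fn)   # true-positive rate (sensitivity)
    fpr = fp / (fp + tn)   # false-positive rate
    return fpr, tpr

for t in [0.2, 0.5, 0.75]:
    fpr, tpr = rates(t)
    print(f"threshold={t:.2f}  FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Lowering the threshold catches more true positives but admits more false positives; the ROC curve is exactly this set of (FPR, TPR) points, and choosing an operating point on it is the tolerance decision described above.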

Some companies are emerging to serve the developer, including Teradata and C3, as well as the compute and storage cloud service providers: Microsoft, Google and Amazon. While there is plenty of room for creating custom enterprise AI applications, the true beginning of the next era will be the emergence of packaged AI applications. There are beginning to be some examples. Visier, founded by John Schwartz, the former CEO of Business Objects, has built a packaged application focused on the HR worker. Yotascale has chosen to focus on the IT worker who is managing complex cloud infrastructure. WellLine built a packaged enterprise AI application for the petro-technical engineers in the oil & gas industry using the Maana platform. Lecida, founded by some of my former Stanford students, is delivering a collaborative intelligence application for workers who manage industrial (construction, pharma, chemical, utility, etc.) machines. They are using AI technology to make machines smart enough to “talk” with human experts when they need to. Those models are built in less than 48 hours using a ton of software technology.

In order for data to be the new oil, we need to begin the next era and start building custom or packaged enterprise AI applications. These applications serve the worker, not the software developer or business analyst. The worker might be a reliability engineer, a pediatric endocrinologist or a building manager. Enterprise AI applications will have millennial UIs built for mobile devices, augmented reality and voice. And these applications will use the oceans of data coming from both the Internet of People and the Internet of Things to discover deeper insights, make predictions, or generate recommendations. We need to move beyond infrastructure to applications.

By Timothy Chou

One of the joys of teaching at Stanford is the quality of the students. A few years ago, I met Dr. Anthony Chang, who was coming back to school to earn a master’s degree in bioinformatics after having already earned his MBA, MD and MPH. It took him 3 1/2 years to complete, as he was still on call as chief of pediatric cardiology at Children’s Hospital of Orange County, didn’t know how to program, and as a lifelong bachelor had decided to adopt two children under the age of two.

Among his many accomplishments is starting the AIMed conference, which, as the name implies, focuses on AI in medicine. It’s held annually at the Ritz-Carlton Laguna Niguel in mid-December. Anthony attracts an amazing group of doctors who can talk about both pediatric endocrinology and graph databases. Since the conference is held near Christmas, I often call Anthony “The Tree” and all the guest speakers the ornaments. This year, I was asked to speak about the future of AI in medicine.

But before we talk about the future, let’s talk about the past. I was struck by one of the doctors talking about an $80M EMR application implementation. Having experience implementing enterprise ERP applications, I was amazed at the number. It turns out this is not the high-water mark, with examples extending to north of $1B. Seriously?

Can an EMR application be the foundation for the future of AI in medicine? These applications are largely based on software from the 80s. If you were to think of cars, it’s like trying to build an autonomous car using technology from a Model T parts bin. Furthermore, these applications were architected to serve billing, not patients. As a result there is no way to deliver personalized healthcare. After all, why should your bill look different than mine? And finally, rather than being designed to collect and learn from exabytes of global data from healthcare machines, they are built to archive notes from a set of isolated doctors who spend valuable time as typists. Maybe you should spend $10M to feed a billing application, but not $100M.

The future of AI in medicine depends on data. The more data, the more accuracy. Where is that data? Not in the EMR. It’s in the healthcare machines: the MRI, ultrasound, CT, immunoanalyzer, X-ray, blood analyzer, mass spectrometer, cytometer, and gene sequencer. Unfortunately, the world of medicine lives in a disconnected state. My informal survey suggests that less than 10% of the healthcare machines in a hospital are connected. For those in computing, it looks like the 1990s when we had NetWare, Windows, Unix, and AS/400 machines that couldn’t talk to each other — until the Internet.

It turns out that in 1994, when the Internet reached 1,000,000 connected machines, the first generation of Internet companies like Netscape and eBay took off. And as the number of connected machines grew, we ended up with even more innovations. Who could imagine Netflix, Amazon, Google and Lyft before the Internet?

It turns out that if you connected all the healthcare machines in all the children’s hospitals in the world, we’d get to 500,000 machines, very close to the 1,000,000 machines that transformed the Internet. What would this enable? To begin with, we could get rid of CD-ROMs and the US Mail as the mechanism for doctors sharing data across the country. The CheXNet pneumonia digital assistant was developed with only 420 X-rays; what if they had 4,200,000 images? But I’m sure this is just scratching the surface of what will be possible.

It’s clear the world of medicine where we pour knowledge into an individual’s head and let them, their machines and their patients operate in isolation is at an end. The challenges of connecting healthcare machines, collecting data and learning from that data are immense, but the benefit might actually change the world and it could cost a lot less than $100M.

By Aaron Carroll, MD

“Americans argue over insurance while Singaporeans keep perfecting the delivery of care,” says Dr. Aaron Carroll.

The following originally appeared on The Upshot (copyright 2019, The New York Times Company).

Singapore’s health care system is sometimes held up as an example of excellence, and as a possible model for what could come next in the United States.

When we published the results of an Upshot tournament on which country had the world’s best health system, Singapore was eliminated in the first round, largely because most of the experts had a hard time believing much of what the nation seems to achieve.

It does achieve a lot. Americans have spent the last decade arguing loudly about whether and how to provide insurance to a relatively small percentage of people who don’t have it. Singapore is way past that. It’s perfecting how to deliver care to people, focusing on quality, efficiency and cost.

Americans may be able to learn a thing or two from Singaporeans, as I discovered in a recent visit to study the health system, although there are also reasons that comparisons between the nations aren’t apt.

Singapore is an island city-state of around 5.8 million people. At 279 square miles, it’s smaller than Indianapolis, the city where I live, and is without rural or remote areas. Everyone lives close to doctors and hospitals.

Another big difference between Singapore and the United States lies in social determinants of health. Citizens there have much less poverty than one might see in other developed countries.

The tax system is progressive. The bottom 20 percent of Singaporeans in income pay less than 10 percent of all taxes and receive more than a quarter of all benefits. The richest 20 percent pay more than half of all taxes and receive only 12 percent of the benefits.

Everyone lives in comparable school systems, and the government heavily subsidizes housing. Rates of smoking, alcoholism and drug abuse are relatively low. So are rates of obesity.

All of this predisposes the country to better health and accompanying lower health spending. Achieving comparable goals in the United States would probably require large investments in social programs, and there doesn’t appear to be much of an appetite for that.

There’s also a big caveat to Singapore’s success. It has a significant and officially recognized guest worker program of noncitizens. About 1.4 million foreigners work in Singapore, most in low-skilled, low-paying jobs. Such jobs come with some protections, and are often better than what might be available in workers’ home countries, but these workers are also vulnerable to abuse.

Guest workers are not eligible for the same benefits (including access to the public health system beyond emergency services) that citizens or permanent residents are, and they aren’t counted in any metrics of success or health. Clearly this saves money and also clouds the ability to use data to evaluate outcomes.

The government’s health care philosophy is laid out clearly in five objectives.

In the United States, conservatives may be pleased that one objective stresses personal responsibility and cautions against reliance on either welfare or medical insurance. Another notes the importance of the private market and competition to improve services and increase efficiency.

Liberal-leaning Americans might be impressed that one objective is universal basic care and that another goal is cost containment by the government, especially when the market fails to keep costs low enough.

Singapore appreciates the relative strengths and limits of the public and private sectors in health. Often in the United States, we think that one or the other can do it all. That’s not necessarily the case.

Dr. Jeremy Lim, a partner in Oliver Wyman’s Asia health care consulting practice based in Singapore and the author of one of the seminal books on its health care system, said, “Singaporeans recognize that resources are finite and that not every medicine or device can be funded out of the public purse.”

He added that a high trust in the government “enables acceptance that the government has worked the sums and determined that some medicines and devices are not cost-effective and hence not available to citizens at subsidized prices.”

In the end, the government holds the cards. It decides where and when the private sector can operate. In the United States, the opposite often seems true. The private sector is the default system, and the public sector comes into play only when the private sector doesn’t want to.

In Singapore, the government strictly regulates what technology is available in the country and where. It makes decisions as to what drugs and devices are covered in public facilities. It sets the prices and determines what subsidies are available.

“There is careful scrutiny of the ‘latest and greatest’ technologies and a healthy skepticism of manufacturer claims,” Dr. Lim said. “It may be at the forefront of medical science in many areas, but the diffusion of the advancements to the entire population may take a while.”

Government control also applies to public health initiatives. Officials began to worry about diabetes, so they acted. School lunches have been improved. Regulations have been passed to make meals on government properties and at government events healthier.

In the United States, the American Academy of Pediatrics and the American Heart Association recently called on policymakers to impose taxes and advertising limits on the soda industry. But that’s merely guidance; there’s no power behind it.

In Singapore, campaigns have encouraged drinking water, and healthier food choice labels have been mandated. The country, with control over its food importation, even got beverage manufacturers to agree to reduce sugar content in drinks to a maximum of 12 percent by 2020.

Singapore gets a lot of attention because of the way it pays for its health care system. What’s less noticed is its delivery system.

Primary care, most of it low cost, is provided mostly by the private sector. About 80 percent of Singaporeans get such care from about 1,700 general practitioners. The rest use a system of 18 polyclinics run by the government.

As care becomes more complicated — and therefore more expensive — more people turn to the polyclinics. About 45 percent of those who have chronic conditions use polyclinics, for example.

The polyclinics are a marvel of efficiency. They have been designed to process as many patients as quickly as possible. The government encourages citizens to use their online app to schedule appointments, see wait times and pay their bills.

Even so, a major complaint is the wait time. Doctors carry a heavy workload, seeing upward of 60 patients a day. There’s also a lack of continuity. Patients at polyclinics don’t get to choose their physicians. They see whoever is working that day.

Care is cheap, however. A visit for a citizen costs 8 Singapore dollars for the clinic fees, a little under $6 U.S. Seeing a private physician can cost three times as much (still cheap in American terms).

For hospitalizations, the public vs. private share is flipped. Only about 20 percent of people choose a private hospital for care. The other 80 percent choose to use public hospitals, which are — again — heavily subsidized. People can choose levels of service there (from A to C, as described in an earlier Upshot article), and most choose a “B” level.

About half of all care provided in private hospitals is to noncitizens of Singapore. Even for citizens who choose private hospitals, as care gets more expensive, they move to the public system when they can.

So Singapore isn’t really a more “private” system. It’s just privately funded. In effect, it’s the opposite of what we have in the United States. We have a largely publicly financed private delivery system. Singapore has a largely privately financed public delivery system.

There’s also more granular control of the delivery system. In 1997, there were about 60,000 ambulance calls, but about half of those were not for actual emergencies. What did Singapore do? It declared that while ambulance services for emergencies would remain free, those who called for nonemergencies would be charged the equivalent of $185.

Of course, this might cause the public to be afraid to call for real emergencies. But the policy was introduced with intensive public education and messaging. And Singaporeans have identifier numbers that are consistent across health centers and types of care.

“The electronic health records are all connected, and data are shared between them,” said Dr. Marcus Ong, the emergency medical services director. “When patients are attended to for an emergency, records can be quickly accessed, and many nonemergencies can be then cleared with accurate information.

“By 2010, there were more than 120,000 calls for emergency services, and very few were for nonemergencies.”

Singapore made big early health leaps, relatively inexpensively, in infant mortality and increased life expectancy. It did so in part through “better vaccinations, better sanitation, good public schools, public campaigns against tobacco” and good prenatal care, said Dr. Wong Tien Hua, the immediate past president of the Singapore Medical Association.

But in recent years, as in the United States, costs have started to rise much more quickly with greater use of modern technological medicine. The population is also aging rapidly. It’s unlikely that the country’s spending on health care will approach that of the United States (18 percent of G.D.P.), but the days of spending significantly less than the global average of 10 percent are probably numbered.

Medical officials are also worried that the problems of the rest of the world are catching up to them. They’re worried that diabetes is on the rise. They’re worried that fee-for-service payments are unsustainable. They’re worried hospitals are learning how to game the system to make more money.

But they’re also aware of the possible endgame. One told me, “Nobody wants to go down the United States route.”

Perhaps most important, the health care system in Singapore seems more geared toward raising up all its citizens than toward achieving excellence in a few high-profile areas.

Without major commitments to spending, we in the United States aren’t likely to see major changes to social determinants of health or housing. We also aren’t going to shrink the size of our system or get everyone to move to big cities.

It turns out that Singapore’s system really is quite remarkable. It also turns out that it’s most likely not reproducible. That may be our loss.

Aaron E. Carroll, MD, MS, is a healthcare speaker and professor of pediatrics at Indiana University School of Medicine who blogs on health research and policy at The Incidental Economist and makes videos at Healthcare Triage.

Innovation is a relentless pursuit for every successful organization, cutting across geographies and industries. And many are driving disruption by promoting intrapreneur teams too.

Doug Hall argues this is a flawed approach, benefitting only a select few. Innovation needs to operate in a broader realm—one that encompasses all and promises a level playing field.

For 35 years, I consulted with global corporations on the selection and development of intrapreneurial thinking.

The theory was that if multinationals could get just a few people to think entrepreneurially, a transformation in business results would follow. Today, I believe that this was a mistake. This article explains why I feel that igniting small teams of intrapreneurs does not work. It also explains what I am observing does work.

About five years ago, I embarked on a study of the root causes of innovation success and failure. My data included over 25,000 innovations and quantitative measurements of the innovation skills and attitudes of hundreds of thousands of managers.

The data taught me that the true return on investment from small teams of intrapreneurs was devastatingly small. Specifically, ideas developed by intrapreneurs lost 50 percent of their value when they interfaced with the corporate ‘system’ during development. A Fortune 20 corporation found nearly identical results when it analyzed its own history.

Initially, intrapreneurship teams generate the perception of success. This is because they spend the majority of their time on the innovation as opposed to ‘the system’. However, the journey from entrepreneurial idea to market reality in a corporation requires capital as well as technical and human resources. As a result, the success eventually vaporizes as an open or passive-aggressive civil war breaks out when the new innovation needs to access resources within the existing organization.

If the intrapreneurs are real entrepreneurs, they fight to prevent compromise. In these cases, at around 18 months, they end up spending more time trying to convince those in the system to support their breakthrough innovation than they spend on the idea itself. As momentum stalls, those within the system point out the waste of time and energy, and slowly but surely the leadership gives up on the venture and/or the team members give up and quit the company.

The failure of intrapreneur-driven projects is regrettable. However, my bigger reason for no longer supporting this approach has to do with basic human rights. It is not fair to allow some within the organization the freedom to think and create cool stuff outside the system, using advanced tools and methods, while shackling others within a bureaucratic prison.

Further, when the CEO stands up and celebrates a team that innovates outside the system, what message do you think it sends to the others? Those in the existing system have ideas too. They can make the ideas happen too. They could do amazing things as well… ‘if’ they had the same training, tools, and freedom as the innovators.

Today, instead of enabling a few, I recommend igniting an intrapreneurship/innovation mind-set across everyone, everywhere, every day. Enabling everyone to be an intrapreneur was initially very difficult. However, new discoveries in adult education and data-driven digital tools and methods are making it increasingly easy.

The potential impact of enabling everyone to think smarter, faster, and more creatively has a proven pedigree in factories. Since the end of World War II, corporations in Japan and across the world have embraced production as a system of interconnected parts. They have learned that the key to success is enabling everyone to produce quality.

The founding architect of today’s quality programs was Dr. W. Edwards Deming. He famously taught that 94 percent of manufacturing quality problems are due to poor systems, while only 6 percent are the result of poor workers.

He further taught that the most effective systems were intrinsic. They involved training employees in the new way of thinking and giving them the tools and methods they need to work smarter.

William Hopper, co-author of The Puritan Gift, explained to me that enabling employees was the key to the Japanese Miracle: “In 1961 when Sumitomo Electric Industries won the Deming Prize (Japan’s manufacturing quality prize) they did it in a totally different way. Before their victory the winner’s quality efforts were driven by experts. Sumitomo enabled all of the workers to be a part of the process of quality.”

A newspaper story in Japan on Sumitomo’s success told how they enabled frontline employees: “Foremen were trained to prepare control charts and became fully able to use them themselves. They then changed working methods so that younger workers could make products at a high yield. Before this quality-control method was introduced, only some highly trained technicians, with special skill and experience, could make products at a high yield. Afterwards, foremen were able to change the production method so that high yield was attained.

“Sumitomo spent several million yen to introduce the new quality-control procedures, but the profit from them was in the hundreds of millions. The experience of Sumitomo is that if all employees cooperate to improve the method of manufacturing the product, a very high standard can be achieved.” (Shared by Kenneth Hopper)

Sumitomo’s win changed how Japanese companies approached quality. Instead of a few experts, they engaged everyone. The result was an exponential growth in success for those companies that embraced the new approach. Toyota manufacturing executives have confirmed to me that yes, the foundation of their success is enabling everyone to build quality as opposed to having a few ‘experts’.

Today, we are observing that the Deming approach works equally well with innovation. We are seeing increases in innovation speed of up to 6x and decreases in risk of up to 80 percent. The bottom-line impacts are impressive. Today, over $17 billion (USD) in innovations are in active development by corporations that have been part of our innovation culture-building pilots.

We still have much to learn about how to create a culture of innovation. What is clear, however, is that there are three steps for getting started:

01  Creating a culture of innovation begins when it is among the leader’s top three personal priorities. The most impactful thing a leader can do to create a culture of innovation is to become personally involved in leading the transformation. Culture change cannot be delegated to another executive. Resistance to culture change should be expected. However, when the leader leads the change, employees take it far more seriously. This was also true with the quality movement. When the leader led, a culture of quality developed. When it was not important enough for the leader to lead quality, it did not sustain. The most effective implementation of an innovation culture is when the ‘leader’ is the CEO. However, we have also seen great success when leaders of business units, divisions, or departments make a culture of innovation their responsibility.

02 The second step in transformation is education and tools. Toyota continually improves its quality by providing employees with never-ending education and tools that amplify their effectiveness. So too with innovation: education is needed because employees are rarely taught reliable and reproducible methods of problem-solving or, as we say, ‘finding, filtering and fast-tracking big ideas’. In addition, new digital innovation tools amplify employee innovation success. For example, PDSA (Plan, Do, Study, Act) project management increases speed, rapid research tools decrease risk, and artificial intelligence tools make it easy for everyone to create and communicate big ideas.

03 The third step is celebrating ‘both’ big and small innovations. Kevin Cahill, Executive Director of the Deming Institute and grandson of Dr. Deming, taught me that the key to success with culture change is encouraging employees to get started immediately. To do this, we teach leaders to enable all employees to use the training and tools they have been provided to work smarter right now. As Kevin said in an interview for my latest book Driving Eureka!, “I believe that every single person in every single organization has some sphere of influence; they can impact something. If they have some understanding of these ideas, some understanding of what the limitations of the system that they’re currently operating in are, then they can make a difference in what they’re doing.” Innovation of big ideas for new or improved products/services is important. However, our research finds that equally important is embracing tens of thousands of smaller ideas for working smarter.

Changing a culture is very hard. However, early data show that investing the leader’s time in creating a culture of innovation is well worth the effort. In addition to the increases in speed and decreases in risk mentioned earlier, we are seeing that when employees are engaged, innovations grow by 28 percent during development instead of declining by 50 percent, a net difference of 78 percentage points in the value of innovations that reach the market.
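A quick sketch of the arithmetic behind those numbers (the $1M starting value is hypothetical, chosen only to make the comparison concrete):

```python
# Compare an idea's value at launch under the two trajectories cited
# above. The $1M starting value is a made-up illustration.
initial_value = 1_000_000  # USD, hypothetical

engaged = initial_value * (1 + 0.28)   # grows 28% during development
legacy  = initial_value * (1 - 0.50)   # declines 50% during development

delta_points = (engaged - legacy) / initial_value * 100
print(f"engaged: ${engaged:,.0f}  legacy: ${legacy:,.0f}  "
      f"difference: {delta_points:.0f} points of original value")
```

The 78-point gap is measured against the idea's original value; in ratio terms the engaged outcome is more than 2.5 times the legacy one.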

As a healthcare professional, you might agree that something has to change in our healthcare system. While we could debate public policy, insurance carriers and the law, I’ll make a case that there are significant steps in technology that could fundamentally change healthcare in the US and the rest of the world.

These days artificial intelligence (AI) has become part of popular press coverage. You might have seen the singer Common talking about AI in a recent Microsoft TV ad. As consumers we experience the power of Siri, Alexa or Google to recognize speech, and if you’re on Facebook, you’ve seen how well facial recognition can work. Recently an AI-powered application beat a human at the game of Go, a feat many thought would take another ten years.

In the world of medicine we are seeing similar advances in the potential for AI to provide the precision diagnostic capability of the world’s best ophthalmologist. One of my former students, Dr. Anthony Chang has taken his considerable knowledge and network and launched the AIMed conference series because he believes it’s time to bring the world of healthcare closer to AI, Big Data and Cloud Computing.

But anyone in the world of machine learning and AI will tell you that the more data we can learn from, the more accurate the analytics. So where is all of this data?

Most hospitals have over 1,000 machines: MRI scanners, CAT scanners, gene sequencers, drug infusion pumps, blood analyzers, etc. Unfortunately these machines are all balkanized. Each Siemens, GE, Beckman, Abbott, Illumina, Philips machine speaks its own language. If you’ve been in computing a long time you’ll recognize we used to be this way. Our AS/400, Unix, mainframe and client-server applications existed in their own worlds, able to communicate only with their own tribe.

In the 1990s this all began to change. The creation of the Internet, based on TCP/IP, changed everything because finally we could have different kinds of machines talk to each other. In the mid-1990s, when the Internet had roughly 1,000,000 machines connected, companies like Netscape, eBay and Amazon were created. At 10,000 machines no one would have cared, but at 1,000,000 it mattered. Fast-forward to today: with billions of machines connected, our experiences with buying books, making travel reservations or moving money are dramatically different.

Now consider the small world of pediatric hospitals. There are today about 500 such hospitals around the world, and on average there are 1,000 machines in each hospital. What if we could connect them all? Maybe, like the consumer Internet, with 500,000 machines connected healthcare could become dramatically different. Sadly, most of the attention today is on EMR/EHR applications, where doctors spend their evenings and weekends typing data into these ancient pre-Internet applications. But the massive amount of data that will power AI applications is not there. The data is in the machines: the blood analyzers, gene sequencers, CAT scanners, and ultrasounds. Maybe if we could just start by connecting the machines in all the pediatric hospitals, we could make a difference in the lives of the 2.2B children in the world.

Timothy Chou was one of only six people to ever hold the President title at Oracle. He is now in his 12th year teaching cloud computing at Stanford and recently launched another book, Precision: Principles, Practices and Solutions for the Internet of Things. Invite Timothy to keynote your next event!

*For a complete list of speakers on this topic, “contact us”.