Timothy Chou


There is no shortage of research and development (R&D) teams that have built great — yet ultimately unsuccessful — products. And while any manufacturer — no matter the industry — is capable of doubling its revenues and quadrupling its margins by building digital service products, it’s a task that’s more easily said than done.

Why? Because the digital service product still has to be sold. And doing so successfully requires not only an investment of time, effort and resources, but also a concerted focus on five key components:

  • Pricing the product
  • Top-level marketing stories
  • Developing a sales team and determining how they will be compensated
  • Successfully managing business operations
  • Determining how to pay for everything

PRICING THE PRODUCT

Let’s start with pricing the product. While pricing can generate a significant amount of debate within an organization, I propose you start with something simple: price the digital service product as a monthly percentage of the purchase price of the machine. In the world of software, this often ranges from 2-6% of the purchase price of the product. Consider starting with 0.5-1% per month. So if your microplate reader is priced at $75,000, you should price the digital service product at $375-$750 per month. Of course, you’ll have a volume discount matrix, which will offer customers who spend more money a bigger discount.
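
As an illustration of that pricing rule, here is a minimal sketch in Python; the discount tiers and the monthly_service_price function are hypothetical, chosen only to show the shape of a volume discount matrix, not taken from any real price book.

```python
# A minimal sketch of the pricing rule described above, assuming an
# illustrative volume discount matrix (the tiers and rates are hypothetical).

def monthly_service_price(machine_price, monthly_rate=0.01, annual_spend=0.0):
    """Price the digital service product as a monthly percentage of the
    machine's purchase price, with a hypothetical volume discount."""
    # Illustrative discount tiers: the more a customer spends per year,
    # the bigger the discount.
    if annual_spend >= 1_000_000:
        discount = 0.20
    elif annual_spend >= 250_000:
        discount = 0.10
    else:
        discount = 0.0
    return machine_price * monthly_rate * (1 - discount)

# A $75,000 microplate reader at 0.5-1% per month prices at $375-$750.
print(monthly_service_price(75_000, monthly_rate=0.005))  # 375.0
print(monthly_service_price(75_000, monthly_rate=0.01))   # 750.0
```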

TOP-LEVEL MARKETING STORIES

Next, you’ll need to define the marketing message. Since your No. 1 competition is the status quo, you should start with “selling the not.” Plenty of people both internal and external to your company think service is break-fix. A few months ago, I had breakfast with the CEO of a company that builds machines for the semiconductor industry. I asked him how many machines he had in the field. He responded by saying around 10,000 to 20,000. The precision of his answer caught my attention immediately. I went on to ask him how much service revenue he generates, to which he responded with the universal sign of a goose egg.

I then asked, “Why zero?” He replied, “No one wants to pay for service.” Of course, the reason no one pays for service is that he defined it as break-fix support. Anyone who has just bought a $250,000 machine would assume it would work, so why pay anything more?

Well, service is not break-fix. Service is information, personal and relevant; information on how to maintain or optimize the performance, availability, security and changes of a machine.

Then there’s the importance of top-level marketing stories. In the modern era, it’s critical for you to find a way to tell your story as a story. Every night when you watch your favorite television show, you’ll see some essential aspects of storytelling. Stories have characters. Stories are set in a particular place and time. And all stories fall into three categories: man vs. man; man vs. nature; man vs. himself. Now check out your marketing collateral. How many stories do you see?

DEVELOPING A SALES TEAM

Once you’ve clearly articulated your digital service product story, your next major step should be to hire and organize your sales team. Make sure it’s a dedicated team. Selling service is not the same as selling new product features. Think of it this way: you may have noticed the Mercedes salesperson and the Mercedes service manager aren’t the same person. Your digital service salespeople will be more farmer than hunter, since the goal is to monetize your installed base. Given these are all customers you should already know, this is not the same as prospecting for new names.

No sales team works without a compensation package. Given you’re moving from selling one time to selling a recurring service, you’ll need to establish a compensation plan that incentivizes not only the initial sale of the service but also, more importantly, the renewal of this valuable recurring revenue stream. In software companies, it’s not uncommon to have customer success managers whose sole focus is continually satisfying the customer.

MANAGING BUSINESS OPERATIONS

Next, you’ll need business operations to create new contracts and ordering documents. You could borrow some of the terms and conditions from the software-as-a-service industry, but my most important recommendation is that you avoid creating service level agreements with point-by-point penalties tied to how you did or did not perform the service. Instead, create an all-encompassing digital service product guarantee. In this guarantee, you should stipulate that, no matter the reason, the customer is entitled to a rebate of 20% in the month the claim is made. This will simplify revenue recognition (which will make your CFO happy), reduce legal expenses and expedite contract signing (which should, in turn, make your sales team happy).

FUNDING

Finally, building and selling a new product line will not happen without investment. You’ll be challenged to re-allocate resources from your traditional business. If you’re looking for a tool to help you think about how to fund this transformation, check out Geoffrey Moore’s latest book, Zone to Win. He talks about putting your annual budget into four major zones:

  • Performance zone
  • Productivity zone
  • Incubation zone
  • Transformation zone

The performance zone is money you spend to deliver material bookings, revenues and contribution margins in this fiscal year. The productivity zone is money you spend to increase the efficiency and effectiveness of your R&D or sales organization; the money spent to deploy a new CRM application would fall into this category. Again, the time horizon is the current fiscal year. Most companies will have 100% of their budgets allocated to these two categories, which brings us to the last two. The incubation zone allocates funds to developing new business models or new products; the time horizon for these investments is 36-72 months. If you have not started a digital service product, then you’d allocate financial resources from the incubation bucket. Finally, the transformation zone is where you put the wood behind the arrow and fund not only the development of the digital service product, but all of the sales and marketing that’s required for it to be successful. Moore says to pick only one project from the incubation zone, and the CEO must sponsor it. Furthermore, you’d expect it to deliver 10% of the current company revenue within a 36-month horizon. Given the market opportunity is at least two times your current product revenues, this is certainly possible for any digital service product.

While not easy, the next major step for any company that makes combine harvesters, front loaders, industrial printers, water purification equipment, agitators or ultrasound machines is to build and sell digital service products. Digital service products deliver information on how to maintain or optimize the performance, availability and security of the machine. These are the fundamental components of the last major step, which is to deliver the product-as-a-service.

We’re already seeing the digital service product revolution occurring in certain industries. When will it start in yours?

How do we double our revenues and quadruple our margins using software?

Based on an executive workshop held in Minneapolis, MN in 2019

If you’re the CEO or board member of a company that manufactures any healthcare, construction, agriculture, power generation, pharmaceutical or industrial machine, you’ve probably heard about IoT, edge, AI, 5G and cloud computing. But why should you care? Why should your company care?

While finding ways to use technology to save money is always good, the bigger driver is using software to increase revenue. I’ll make the case that, as the manufacturer of construction, packaging, oil, gas, healthcare or transportation machines, you can double your revenues and quadruple your margins by building and selling digital service products. Furthermore, you’ll create a barrier that your competition will find difficult to cross.

Software Defined Machine

Next-generation machines are increasingly powered by software. Porsche’s latest Panamera has 100 million lines of code (a measure of the amount of software), up from only two million lines in the previous generation. Tesla owners have come to expect new features delivered through software updates to their vehicles. A software-defined automobile is the first car that will end its life with more features than it began with. But it’s not only cars; healthcare machines are also becoming more software defined. A drug-infusion pump may have more than 200,000 lines of code, and an MRI scanner more than 7,000,000. A modern boom lift — commonly used on construction sites — has 40 sensors and three million lines of code, and a farm’s combine harvester has over five million. Of course, we can debate whether this is a good measure of software, but I think you get the point: machines are increasingly software defined.

So, if machines are becoming more software defined, then the business models that applied to the world of software may also apply to the world of machines. In the rest of this article we’ll cover three business models.

Business Model 1: Product and Disconnected Digital Services

Early on in the software industry we created products and sold them on a CD; if you wanted the next product, you’d have to buy the next CD. As software products became more complex, companies like Oracle and SAP moved to a business model where you bought the product (e.g., ERP or database) together with a service contract. That service contract was priced at roughly 2% of the purchase price of the product per month. Over time, this became the largest and most profitable component of many enterprise software product companies. In the year before Oracle bought Sun Microsystems (when they were still a pure software business), Oracle had revenues of approximately $15B, only $3B of which was product revenue; the other $12B (over 80%) was high-margin, recurring service revenue.

But what is service? Is service answering the phone nicely from Bangalore? Is it flipping burgers at McDonald’s? The simple answer is no. Service is the delivery of information that is personal and relevant to you. That could be the hotel concierge telling you where to get the best Szechwan Chinese food in walking distance, or your doctor telling you that, based on your genome and lifestyle, you should be on Lipitor. Service is personal and relevant information.

I’ve heard many executives of companies that make machines say, “Our customers won’t pay for service.” Well, of course: if you think service is break-fix, then the customer clearly thinks you should build a reliable product. Remember Oracle’s service revenue? In 2004, the Oracle Support organization studied the 100 million requests for service it received and found that over 99.9% of those requests were answered with known information. Aggregating information from thousands of different uses of the software, even in a disconnected state, represented huge value over the knowledge of a single person in a single location. Service is not break-fix. Service is personal and relevant information about how to maintain or optimize the availability, performance or security of the product, all delivered in time and on time.

Business Model 2: Product and Connected Digital Services

The next major step in software business models was to connect to the computers that ran the software. This enabled even more personal and more relevant information on how to maintain or optimize the performance, availability and security of the software product. These digital services are designed to assist IT workers in maintaining or optimizing the product (e.g., database, middleware, financial application). For example, knowing the current patch level of the software enables the service to recommend that only the relevant security patches be applied. Traditional software companies charge between 2% and 3% of the product price per month for a connected digital service. The advantage of this model is the ability to target the installed base of enterprises that have purchased the product under the traditional Model 1.

Now let’s move to the world of machines. If a company knows both the model number and current configuration of the machine, as well as the time-series data coming from hundreds of sensors, then the digital service can be even more personal and relevant, allowing the company to provide precision assistants for the workers who maintain or optimize the performance, availability and security of the healthcare, agriculture, construction, transportation or water purification machine.

Furthermore, assume you build this digital service product and price it at just 1% of the purchase price of the product per month. If your company sells a machine for $200K and you have an installed base of 4,000 connected machines, you could generate nearly $100M of high-margin, annual recurring revenue. And since digital service margins can be much bigger than product margins, companies that have moved to just 50/50 models (50% service, 50% product) have seen their margins quadruple.
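
Here is the back-of-the-envelope arithmetic behind that claim as a short sketch; the variable names are illustrative, and it assumes every connected machine in the installed base subscribes.

```python
# A back-of-the-envelope sketch of the recurring-revenue math above.
# Assumes every connected machine in the installed base subscribes.

machine_price = 200_000   # purchase price of the machine ($)
monthly_rate = 0.01       # digital service priced at 1% of purchase price per month
installed_base = 4_000    # connected machines

monthly_fee = machine_price * monthly_rate              # $2,000 per machine per month
annual_recurring_revenue = monthly_fee * 12 * installed_base

print(f"ARR: ${annual_recurring_revenue:,.0f}")          # ARR: $96,000,000 (~$100M)
```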

While this business model has been aggressively deployed in high tech, we are still in the early days with machine manufacturers. There are some early leaders. Companies like GE and a major elevator supplier derive 50% of their revenue from service. Voltas, a large HVAC manufacturer, is an 80/20 company — meaning it derives 20% of its revenue from services. In the healthcare area, Abbott has introduced a digital service product called AlinIQ, and Ortho Clinical is selling Ortho Care as an annual subscription service. While some of this is lower-margin, human-powered, disconnected service, the value of a recurring revenue stream is not lost on the early leaders.

Business Model 3: Product-as-a-Service

Once you can tell the worker how to maintain or optimize the security, availability or performance of the product, the next step is to simply take over that responsibility as the builder of the product. Over the last fifteen years we’ve seen the rise of Software-as-a-Service (SaaS) companies such as Salesforce.com, Workday and Blackbaud, which all deliver their products as a service. In the past seven years this has also happened with server hardware and storage products, as companies like Amazon, Microsoft and Google provide compute and storage products as a service.

All of these new product-as-a-service companies have also changed the pricing to a per-transaction, per-seat, per-instance, per-month or per-year model. We’re likely to see the same with agricultural, construction, transportation and healthcare machines. Again, there are some early examples: Kaeser Compressors is delivering air-as-a-service, and AGCO is selling sugar cane harvesters by the bushel harvested. In the consumer world, we’re all familiar with Uber and Lyft, which provide transportation machines as a service, priced per ride. Of course, the most expensive operating cost of the ride is the human labor, so like those of us in high-tech software and hardware products, they are looking at replacing the human labor with automation.

So why should you care about IoT, edge, 5G, AI and cloud computing? Not because they are cool technologies, but because they will enable you to double your topline revenues and quadruple your margins with high quality recurring revenue. And by the way, all the while building a widening gap with your competition.

For more detail, see the five keys to building digital service products and selling them.

By Timothy Chou

It’s no secret that over the past 4 years there have been dramatic improvements in the use of AI technology to recognize images, translate text, win the game of Go or talk to us in the kitchen. Whether it’s Google Translate, Facebook facial recognition or Amazon’s Alexa, these innovations have largely been focused on the consumer.

On the enterprise side, progress has been much slower. We’ve all been focused on building data lakes (whatever that is) and trying to hire data scientists and machine learning experts. While this is fine, we need to get started building enterprise AI applications. Enterprise AI applications serve the worker, not the software developer or business analyst. The worker might be a fraud detection specialist, a pediatric cardiologist or a construction site manager. Enterprise AI applications leverage the amazing amount of software that has been developed for the consumer world. These applications have millennial UIs and are built for mobile devices, augmented reality and voice interaction. Enterprise AI applications use many heterogeneous data sources inside and outside the enterprise to discover deeper insights, make predictions or generate recommendations. A good example from the consumer world is Google Search. It’s an application focused on the worker, not the developer, with a millennial UI, and it uses many heterogeneous data sources. Open up the hood and you’ll see a ton of software technology inside.

With the advent of cloud computing and the continued development of open source software, building application software has changed dramatically in the past 5 years. It might be as dramatic as moving from ancient mud brick to modern prefab construction. As you’ll see, we have a ton of software technology that’s become available. Whether you’re an enterprise building a custom application or a new venture building a packaged application, you’ll need to do three things.

  1. Define the use case. Define the application. Who is the worker? Is it an HR professional, a reliability engineer or a pediatric cardiologist?
  2. The Internet is the platform. Choose wisely. We’ll discuss this in more depth in this article.
  3. Hire the right team. The team will have a range of expertise, including business analysts, domain experts, data scientists, data engineers, DevOps specialists and programmers.

For enterprises that are considering building scalable, enterprise-grade AI applications, there’s never been a better time — there are hundreds of choices, many inspired by innovations in the consumer Internet. To understand the breadth, I’ve arbitrarily created sixteen different categories, each with a brief description and some example products. We’ll mix open source software, which can run on any compute and storage cloud service, with managed cloud services.

  1. Compute & Storage Cloud Services provide compute and storage resources on demand, managed by the provider of the service. While you could build your application using on-premises compute & storage, it would both increase the number of technology decisions and raise the overall upfront cost, both in capital equipment and in the people needed to manage the resources. Furthermore, the ability to put 1,000 servers to work for 48 hours for less than $1,000 is an economic model unachievable in the on-premises world. Choices include but are not limited to AWS, Google Cloud, Microsoft Azure, Rackspace, IBM Cloud and AliCloud.
  2. Container Orchestration. VMWare pioneered the ability to create virtual hardware machines, but VMs are heavyweight and non-portable. Modern AI applications are using containers based on OS-level virtualization rather than hardware virtualization. They are easier to build than VMs, and because they are decoupled from the underlying infrastructure and from the host file system, they are portable across clouds and OS distributions. Container orchestration orchestrates computing, networking, and storage infrastructure on behalf of user workloads. Choices include but are not limited to Kubernetes, Mesos, Swarm, Rancher and Nomad.
  3. Batch Data Processing. As data set sizes get larger, an application needs a way to efficiently process large datasets. Instead of using one big computer to process and store the data, modern batch data processing software allows clustering commodity hardware together to analyze large data sets in parallel. Choices include but are not limited to Spark, Databricks, Cloudera, Hortonworks, AWS EMR and MapR.
  4. Stream Data Processing. An AI application that is designed to interact with near real-time data will need streaming data processing software. Streaming data processing software has three key capabilities: publish and subscribe to streams of records; store streams of records in a fault-tolerant, durable way; and process streams of records as they occur. Choices include but are not limited to Spark Streaming, Storm, Flink, Apex, Samza and IBM Streams.
  5. Software Provisioning. From traditional bare metal to serverless, automating the provisioning of any infrastructure is the first step in automating the operational life cycle of your application. Software provisioning frameworks are designed to provision the latest cloud platforms, virtualized hosts and hypervisors, network devices and bare-metal servers. Software provisioning provides the connecting tool in any of your process pipelines. Choices include but are not limited to Ansible, Salt, Puppet, Chef, Terraform, Troposphere, AWS CloudFormation, Docker Suite, Serverless and Vagrant.
  6. IT Data Collect. Historically, many IT applications were built on SQL databases. Any analytic application will need the ability to collect data from a variety of SQL data sources. Choices include but are not limited to Teradata, Postgres, MongoDB, Microsoft SQL Server and Oracle.
  7. OT Data Collect. For analytic applications involving sensor data, there will be a need to collect and process time-series data. Products include traditional historians such as AspenTech InfoPlus.21, OSIsoft’s PI and Schneider’s Wonderware, as well as traditional database technologies extended for time-series data, such as Oracle. For newer applications, product choices include but are not limited to InfluxDB, Cassandra, PostgreSQL, TimescaleDB and OpenTSDB.
  8. Message Broker. A message broker is a program that translates a message from the messaging protocol of the sender to the messaging protocol of the receiver. This means that when you have a lot of messages coming from hundreds of thousands to millions of end points, you’ll need a message broker to create a centralized store/processor for these messages. Choices include but are not limited to Kafka, Kinesis, RabbitMQ, Celery, Redis and MQTT.
  9. Data Pipeline Orchestration. Data engineers create data pipelines to orchestrate the movement, transformation, validation and loading of data from source to final destination. Data pipeline orchestration software allows you to define the collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies. Choices include but are not limited to Airflow, Luigi, Oozie, Conductor and Nifi.
  10. Performance Monitoring. Any application, including an analytic application, requires real-time performance monitoring to determine bottlenecks and ultimately be able to predict performance. Choices include but are not limited to Datadog, AWS CloudWatch, Prometheus, New Relic and Yotascale.
  11. CI/CD. Continuous integration (CI) and continuous delivery (CD) software embodies a set of operating principles and practices that enable analytic application development teams to deliver code changes more frequently and reliably. The implementation is also known as the CI/CD pipeline and is one of the best practices for DevOps teams to adopt. Choices include but are not limited to Jenkins, CircleCI, Bamboo, Semaphore CI and Travis.
  12. Backend Framework. Backend frameworks consist of languages and tools used in server-side programming in an analytic application development environment. A backend framework is designed to speed the development of the application by providing a higher-level programming interface to design data models, handle web requests and provide other commonly required features. Choices include but are not limited to Flask, Django, Pyramid, Dropwizard, Elixir and Rails.
  13. Front-end Frameworks. Applications need a user interface, and there are numerous front-end frameworks for building them. These front-end frameworks serve as a base for the development of single-page or mobile applications. Choices include, but are not limited to, Vue, Meteor, React, Angular, jQuery, Ember, Polymer, Aurelia, Bootstrap, Material UI and Semantic UI.
  14. Data Visualization. An analytic application needs plotting software to produce publication-quality figures in a variety of hard-copy formats and interactive environments across platforms. Data visualization software allows you to generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. Choices include, but are not limited to, Tableau, PowerBI, Matplotlib, d3, VX, react-timeseries-chart, Bokeh, seaborn, plotly, Kibana and Grafana.
  15. Data Science. Data science tools allow you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, and support for large, multi-dimensional arrays and matrices. Choices include, but are not limited to Python, R, SciPy, NumPy, Pandas, NetworkX, Numba, SymPy, Jupyter Notebook, Jupyter Labs.
  16. Machine Learning. Machine learning frameworks provide useful abstractions to reduce the amount of boilerplate code and speed up deep learning model development. ML frameworks are useful for building feed-forward networks, convolutional networks and recurrent neural networks. Choices include, but are not limited to, Python, R, TensorFlow, Scikit-learn, PyTorch, Spark MLlib, Spark ML, Keras, CNTK, DyNet, Amazon Machine Learning, Caffe, Azure ML Studio, Apache MXNet and MLflow. (A minimal sketch using the data science and machine learning categories follows this list.)
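
To make the last two categories concrete, here is a minimal sketch that uses a couple of these building blocks (pandas and scikit-learn) on synthetic data; the feature names and the failure-prediction use case are hypothetical, not drawn from any particular product.

```python
# A minimal sketch combining the Data Science and Machine Learning categories
# above (pandas + scikit-learn on synthetic data); the feature names are
# hypothetical, not from any real application.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "vibration": rng.normal(size=1_000),
    "temperature": rng.normal(size=1_000),
})
# Synthetic label: machines running hot and vibrating tend to fail.
df["failed"] = ((df.vibration + df.temperature +
                 rng.normal(scale=0.5, size=1_000)) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["vibration", "temperature"]], df["failed"],
    test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```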

If you’re curious, check out some of the product choices Uber made.

We need to begin the next era of enterprise software and start to build custom or packaged enterprise AI applications. Applications that serve the workers, not developers; have millennial UIs and use the oceans of data coming from both the Internet of People and the Internet of Things. Luckily many of the infrastructure building blocks are now here, so stop using those mud bricks.

Timothy Chou was one of only six people to ever hold the President title at Oracle. He is now in his 12th year teaching cloud computing at Stanford and recently launched another book, Precision: Principles, Practices and Solutions for the Internet of Things. Invite Timothy to keynote your next event!

I’ve been wondering for a while what might be next for enterprise software. Whether you’re a small private or large public company, where should you invest your time and money?

Maybe looking into the past can give us some guidance. Enterprise software has gone through three distinct eras. In the 1st era, infrastructure software companies like Microsoft and Oracle emerged, focused on programmers. Software developers used Microsoft Visual Basic and the Oracle database to build custom workflow applications for the enterprise throughout the 90s. By the late 90s, the 2nd era of enterprise software began with the creation of packaged on-premises enterprise workflow applications. Companies emerged including PeopleSoft, Siebel, SAP and Oracle. These applications focused on automating key workflows like order-to-cash, purchase-to-pay or hire-to-fire. Enterprises didn’t need to hire programmers to develop these workflow applications; they only needed to buy, implement and manage them. The 3rd era began in the 2000s with the delivery of packaged workflow applications as a cloud service. Examples abound, including Salesforce, Workday, Blackbaud and ServiceNow. This 3rd era eliminated the need for the enterprise to hire operations people to manage the applications and has accelerated the adoption of packaged enterprise workflow applications. While you could still hire programmers to write a CRM application, and operations people to manage it, why would you?

Let’s now switch our attention to analytics, which is focused not on automating a process, but on learning from the data to discover deeper insights, make predictions or generate recommendations. Analytics has been populated with companies specializing in the management of the data (e.g., MongoDB, Teradata, Splunk, Cloudera, Snowflake, Azure SQL, Google BigQuery, Amazon Redshift); companies dedicated to providing tools for developers or business analysts (e.g., SAS, Tableau, Qlik and Pivotal); as well as software for data engineers, including formerly public companies such as Mulesoft (acquired by Salesforce) and Informatica (acquired by Permira).

Furthermore, thanks to innovations in the consumer Internet (e.g., Facebook facial recognition, Google Translate, Amazon Alexa), there are now hundreds of open source software packages and cloud services available that provide a wide array of AI and analytic infrastructure building blocks. For those interested in geeking out, here is a brief introduction. Some of this technology will be dramatically lower cost. Consider that today, for about $1,000, I can get 1,000 servers for 48 hours to go through a training cycle and build a machine learning model.

I’m going to use the label AI to refer to the entire spectrum of analytic infrastructure technology, and also because it sounds cooler. Today we are largely in the 1st era. The software industry is providing AI infrastructure software and requiring the enterprise to hire the programmers and ML experts to build the application, as well as the DevOps people to manage the deployment. This is nearly the same as the 1st era of enterprise workflow software.

If we’re to follow the same sequence as workflow applications, we need to move beyond the 1st era, which is focused on developers, and start building enterprise AI applications.

So what is an enterprise AI application?

Enterprise AI applications serve the worker, not the software developer or business analyst. The worker might be a fraud detection specialist, a pediatric cardiologist or a construction site manager.

Enterprise AI applications have millennial UIs and are built for mobile devices, augmented reality and voice interaction.

Enterprise AI applications use historical data. Most enterprise workflow applications eliminate data once the workflow or the transaction completes.

Enterprise AI applications use lots of data. Jeff Dean has taught us that with more data and more compute we can achieve near-linear accuracy improvements.

Enterprise AI applications use many heterogeneous data sources inside and outside the enterprise to discover deeper insights, make predictions, or generate recommendations and learn from experience.

A good example of a consumer AI application is Google Search. It’s an application focused on the worker, not the developer, with a millennial UI and uses many heterogeneous data sources. Open the hood and you’ll see a ton of infrastructure software technology inside. So what are the challenges of building enterprise AI applications?

  1. The nice thing about transactional or workflow applications is that the processes they automate are well defined and follow some standards. Thus, there is a finite universe of these apps. Enterprise AI applications will be much more diverse and serve workers as different as the service specialist for a combine harvester, a radiologist or the manager of an offshore oil drilling rig.
  2. The application development teams will be staffed differently. Teams will have a range of expertise, including business analysts, domain specialists, data scientists, data engineers, DevOps specialists and programmers. With such a wide array of cloud-based software, even programming will look different.
  3. Finally, the development of these analytic applications will require a different methodology than was used to build workflow applications. In workflow applications, we can judge whether the software worked correctly or not. In enterprise AI applications, we’ll have to learn the definition of a ROC curve and determine what level of false positives and false negatives we’re willing to tolerate (a small sketch of this evaluation follows this list).
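
For readers new to the idea, here is a small sketch of that evaluation using scikit-learn; the labels and scores are made up, and the choice of operating point is something each team has to make for its own application.

```python
# A small sketch of the ROC-curve evaluation mentioned in point 3, using
# scikit-learn on made-up labels and scores; the threshold walk-through
# below is illustrative only.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                      # ground truth
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])   # model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))

# Picking an operating point is a business decision: each threshold trades
# false positives (fpr) against false negatives (1 - tpr).
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: FPR {f:.2f}, FNR {1 - t:.2f}")
```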

Some companies are emerging to serve the developer, including Teradata and C3, as well as the compute & storage cloud service providers Microsoft, Google and Amazon. While there is plenty of room for creating custom enterprise AI applications, the true beginning of the next era will be the emergence of packaged AI applications. There are beginning to be some examples. Visier, founded by John Schwartz, the former CEO of Business Objects, has built a packaged application focused on the HR worker. Yotascale has chosen to focus on the IT worker who is managing complex cloud infrastructure. Welline built a packaged enterprise AI application for the petro-technical engineers in the oil & gas industry using the Maana platform. Lecida, founded by some of my former Stanford students, is delivering a collaborative intelligence application for workers who manage industrial (construction, pharma, chemical, utility…) machines. They are using AI technology to make machines smart enough to “talk” with human experts when they need to. Those models are built in less than 48 hours using a ton of software technology.

In order for data to be the new oil, we need to begin the next era and start building custom or packaged enterprise AI applications. These applications serve the worker, not the software developer or business analyst. The worker might be a reliability engineer, a pediatric endocrinologist or a building manager. Enterprise AI applications will have millennial UIs built for mobile devices, augmented reality and voice. And these applications will use the oceans of data coming from both the Internet of People and the Internet of Things to discover deeper insights, make predictions or generate recommendations. We need to move beyond infrastructure to applications.

By Timothy Chou

One of the joys of teaching at Stanford is the quality of the students. A few years ago, I met Dr. Anthony Chang, who was coming back to school to earn a master’s degree in bioinformatics after having already earned his MBA, MD and MPH. It took him 3 1/2 years to complete, as he was still on-call as chief of pediatric cardiology at Children’s Hospital of Orange County, didn’t know how to program, and as a life-long bachelor had decided to adopt two children under the age of two.

Among his many accomplishments is starting the AIMed conference, which, as the name implies, focuses on AI in medicine. It’s held annually at the Ritz-Carlton Laguna Niguel in mid-December. Anthony attracts an amazing group of doctors who can talk about both pediatric endocrinology and graph databases. Since the conference is held near Christmas, I often call Anthony “The Tree,” and all the guest speakers are the ornaments. This year, I was asked to speak about the future of AI in medicine.

But before we talk about the future, let’s talk about the past. I was struck by one of the doctors talking about an $80M EMR application implementation. Having experience implementing enterprise ERP applications, I was amazed at the number. It turns out this is not even the high-water mark, with examples extending north of $1B. Seriously?

Can an EMR application be the foundation for the future of AI in medicine? These applications are largely based on software from the 80s. If you were to think of cars, it’s like trying to build an autonomous car using technology from a Model T parts bin. Furthermore, these applications were architected to serve billing, not patients. As a result, there is no way to deliver personalized healthcare; after all, why should your bill look different than mine? And finally, rather than being designed to collect and learn from exabytes of global data from healthcare machines, they are built to archive notes from a set of isolated doctors who spend valuable time as typists. Maybe you should spend $10M to feed a billing application, but not $100M.

The future of AI in medicine depends on data. The more data, the more accuracy. Where is that data? Not in the EMR. It’s in the healthcare machines: the MRI, ultrasound, CT, immunoanalyzer, X-ray, blood analyzer, mass spectrometer, cytometer and gene sequencer. Unfortunately, the world of medicine lives in a disconnected state. My informal survey suggests that less than 10% of the healthcare machines in a hospital are connected. For those in computing, it looks like the 1990s, when we had NetWare, Windows, Unix and AS/400 machines that couldn’t talk to each other — until the Internet.

It turns out that in 1994, when the Internet reached 1,000,000 connected machines, the first generation of Internet companies like Netscape and eBay took off. And as the number of connected machines grew, we ended up with even more innovations. Who could have imagined Netflix, Amazon, Google or Lyft before the Internet?

It turns out that if you connected all the healthcare machines in all the children’s hospitals in the world, we’d get to 500,000 machines, very close to the 1,000,000 machines that transformed the Internet. What would this enable? To begin with, we could get rid of CD-ROMs and the US Mail as the mechanism for doctors sharing data across the country. The CheXNet pneumonia digital assistant was developed with only 420 X-rays; what if they had 4,200,000 images? But I’m sure this is just scratching the surface of what will be possible.

It’s clear the world of medicine where we pour knowledge into an individual’s head and let them, their machines and their patients operate in isolation is at an end. The challenges of connecting healthcare machines, collecting data and learning from that data are immense, but the benefit might actually change the world and it could cost a lot less than $100M.

As a healthcare professional, you might agree that something has to change in our healthcare system. While we could debate public policy, insurance carriers and the law, I’ll make the case that there are significant steps in technology that could fundamentally change healthcare in the US and the rest of the world.

These days, artificial intelligence (AI) has become part of popular press articles. You might have seen the singer Common talking about AI in a recent Microsoft TV ad. As consumers, we experience the power of Siri, Alexa or Google to recognize speech, and if you’re on Facebook, you’ve seen how well facial recognition can work. Recently, an AI-powered application beat a human at the game of Go, something many thought would take another ten years.

In the world of medicine we are seeing similar advances in the potential for AI to provide the precision diagnostic capability of the world’s best ophthalmologist. One of my former students, Dr. Anthony Chang, has taken his considerable knowledge and network and launched the AIMed conference series because he believes it’s time to bring the world of healthcare closer to AI, big data and cloud computing.


But anyone in the world of machine learning and AI will tell you that the more data we can learn from, the more accurate the analytics. So where is all of this data?

Most hospitals have over 1,000 machines: MRI scanners, CAT scanners, gene sequencers, drug infusion pumps, blood analyzers, etc. Unfortunately, these machines are all balkanized. Each Siemens, GE, Beckman, Abbott, Illumina or Philips machine speaks its own language. If you’ve been in computing a long time, you’ll recognize we used to be this way. Our AS/400, Unix, mainframe and client-server applications existed in their own worlds, able to communicate only with their own tribe.


In the 1990s, this all began to change. The creation of the Internet, based on TCP/IP, changed everything, because finally we could have different kinds of machines talk to each other. In the mid-1990s, when the Internet had roughly 1,000,000 machines connected, companies like Netscape, eBay and Amazon were created. At 10,000 machines no one would have cared, but at 1,000,000 it mattered. Fast-forward to today, with billions of machines connected, and our experiences with buying books, making travel reservations or moving money are dramatically different.

Now consider the small world of pediatric hospitals. There are today about 500 children’s hospitals around the world, and on average there are 1,000 machines in each hospital. What if we could connect them all? Maybe, like the consumer Internet with 500,000 machines connected, healthcare could become dramatically different. Sadly, most of the attention today is on EMR/EHR applications, where doctors spend their evenings and weekends typing data into these ancient pre-Internet applications. But the massive amounts of data that will power AI applications are not there. The data is in the machines: the blood analyzers, gene sequencers, CAT scanners and ultrasounds. Maybe if we could just start by connecting the machines in all the pediatric hospitals, we could make a difference in the lives of the 2.2B children in the world.

Timothy Chou was one of only six people to ever hold the President title at Oracle. He is now in his 12th year teaching cloud computing at Stanford and recently launched another book, Precision: Principles, Practices and Solutions for the Internet of Things. Invite Timothy to keynote your next event!

I was invited to speak at a Generation Investment Management event in San Francisco recently. In particular, I was asked to talk about the organizational and leadership challenges in realizing the commercial and sustainability benefits of data science, analytics and artificial intelligence. What follows is a summary of my comments.

Most people have heard that “data is the new oil,” or, according to a T-shirt I saw recently, the new bacon. And we all know there is going to be more and more data. In a discussion with the CEO of Mercedes-Benz Research & Development last week, he talked about how modern automobiles are capable of generating terabytes per day. He thinks it will be cars that speed the deployment of 5G networks, not YouTube.

But if data is the new oil, it’s still just crude oil, and needs refining. Analytics and data science have changed financial services (read The Quants), retail (check out Amazon) and media (log into Facebook, Twitter or Netflix). All of these industries have invested in building analytic applications, but perhaps the best example is Google Search, an analytic application delivered as a cloud service for all consumers. If you look under the hood, there are amazing software and hardware technologies that refine the crude data and deliver useful information in a simple-to-use application.

So while financial services, retail and media have been transformed by data, the rest of the global economy has been unaffected. The World Economic Forum says that two-thirds of the global GDP is power, transportation, agriculture, construction, healthcare, oil, textiles, shrimp farming, food, beverage, chemicals, mines, and water — the planet’s fundamental infrastructure. Data might be the new oil, but it’s had minimal impact on these industries and on the planet.

There are at least three challenges to address.

1. Analytic Application Cloud Services. We need more analytic application cloud services, not more platform technology. Today, building an analytic application requires at least 16 categories of platform technology and over 100 product choices. And while that’s daunting enough, you’ll also need to hire at least four different types of expertise and organize them to build an application for a business worker. While plenty of platforms and cloud services exist to build workflow applications, why would you do that when you can purchase a CRM application cloud service? The industry needs to build analytic application cloud services as it has so successfully done with workflow applications.

2. Leadership. A few years ago, I was asked to speak to the senior executives of General Electric. It was a dinnertime talk so I thought I should come up with a simple topic. I decided to name the talk: “Why is Software Not Hardware?” I started by saying that I don’t know much about building MRI scanners, wind turbines or jet engines, but I do know it’s not the same as building software. The executive leadership and boards of power, agriculture, construction, healthcare, oil, textiles, shrimp farming, food, beverage, chemicals, mines, and water companies need to understand software and realize that it’s not something they hand to the CIO.

3. Policy. I teach a class on cloud computing at Stanford University; it’s listed in the computer science department, but I use it to invite the rest of the campus to learn about technology. I was particularly pleased to see a number of students from the law school over the past several years. Why? Because today, laws and public policy are being set by people who have no clue about technology. Just watch the Facebook hearing and the questions asked of Mark Zuckerberg. If data science, analytics and artificial intelligence transform the planet, we’re also going to need thoughtful and well-advised public policy across the globe.

If data truly is the new oil, it’s going to take technology, leadership and wise public policy to refine it.

Over the past few years I’ve been on a steep learning curve regarding the construction industry, having started out knowing nothing. Four years ago, Helge Jacobsen, VP at United Rentals—the world’s largest construction machine rental company—invited me to be a part of an all-day strategy session. I was the only non-United Rentals person there. In the first hour I kept hearing people say 19-foot scissor this and 19-foot scissor that. But as I couldn’t imagine a 19-foot pair of scissors, I finally raised my hand and asked, “Why would anyone want to rent a 19-foot scissor?” They all laughed and told me they were talking about a 19-foot scissor lift. Later that year, the United Rentals folks sent me a present—an actual 19-inch scissor (pictured above).

As a student of the construction industry, I’ve learned it’s one of the few industries where productivity has not been improving. According to an analysis by McKinsey, no industry has done worse. Since 1995, the manufacturing industry has nearly doubled productivity, while construction has remained flat.

So what can be done about this?

For starters, we’ve seen the increases in productivity resulting from the power of connecting people on the Internet. So, in construction, we need to start by connecting the construction machines. Once the machines are connected, we can start to collect the data. Of course, many already know that data is the “new oil,” or, as I saw on a t-shirt last week, the “new bacon.” But construction machine companies, construction rental companies, and the companies that build bridges or offshore oil drilling rigs will need to find a way to share their crude, isolated IT and OT digital data so they can use AI/ML technologies to turn it into refined information: information that could optimize their decisions today and ultimately predict the future.

Next, electrification and autonomous operation are already beginning to reshape the transportation industry. It’s clear that the environmental benefits of electrification and the cost and safety benefits of autonomous operation will make a big difference in construction projects; this autonomous excavator is a great example.

In the manufacturing industry, repetition and automation have been the principles that have increased productivity, so many in the industry are starting to think about how these principles might also change the construction industry. Architects and general contractors may rethink how buildings are constructed and, along with that, what tools and machines will be required when the units of construction become much larger.

Improving productivity isn’t the only priority for the construction industry; another is construction site safety. For instance, out on the job site we now have the ability to ensure that only someone certified with the proper training can start a machine, like one of those 19-foot scissor lifts. Likewise, we can monitor the site environment around a number of parameters, such as temperature, humidity, water leakage, atmospheric pressure, noise, vibration and air particulates. And the implications of augmented reality and hands-free, voice-enabled technology to enhance safety are just beginning to be explored. It’s clear a connected job site of the future will not look like it does today.

And finally, the commercial buildings, hospitals or solar farms we build will all be much smarter. We’ll use sensors to ensure the quality of the air and water and everyone will operate much more precisely in how they consume power. In addition, everything from LNG (liquefied natural gas) plants to office buildings will protect us from unintentional threats like fires, as well as the intentional threats we all face in the modern world.

We have a long road of innovation ahead of us.

I just returned from United Rentals’ 19th Total Control conference in Dallas, where we did the unofficial launch of the new book, Precision Construction, the sequel to Precision: Principles, Practices and Solutions for the Internet of Things. If you’re interested in construction, this new book gives anyone who makes, rents or uses construction machines a glimpse of this new, software-defined world, drawing on the knowledge and experience of over 20 co-storytellers from the construction industry. Precision Construction will be available next month on Amazon as both a traditional book and on Kindle. You can also register here to get the Kindle eBook version free for a limited time. Use coupon code: TCL.

Last week I did a dinnertime talk in Houston for my friends at Atomiton. We had executives from quite a few companies, including Halliburton, McDermott, Southwest Energy, Bechtel, Schlumberger and Chevron. While this is not a transcript of the talk, hopefully this is what they heard.

Enterprises that build or use construction, healthcare, oil, gas, energy, agriculture, water, textile or industrial printing machines are starting to think about their edge computing strategy. As a CIO or CDO, what should you be thinking about? What should your edge strategy be?

Let me start with an observation: up until now, most of the software and hardware technology we’ve built has been for the Internet of People (IoP). Whether it’s an eCommerce site or a CRM application, we fundamentally believe there is a person at the other end typing on a keyboard or scrolling through their phone.

But, as I explain to my Stanford kids, People are not Things, and Things are not People. Why do I say that?

There will be way more Things connected to the Internet than People. John Chambers is widely quoted as saying there will be 500B things connected to the Internet. That’s more than 60 times the global population.

Things can be where people are not. Things can be in your stomach as a smart pill. Things can be a mile underground in a coal mine and Things can be in the middle of the Australian Outback. Things can be where people are not.

Things have more to say than People. The best we can do is type, move a mouse or scroll down a screen. Modern day wind turbines have 500 sensors on them. Things have much more to say.

Things can say something much more frequently. The best we can do is press on our touch screens or type on our laptops. A longwall shearer in the coal mining industry has rooftop vibration sensors that sample 10,000 times per second. That’s a lot faster than any of us can type.

And finally (we can debate this), Things can be programmed. People can’t.

So if Things are not People, then why would technology built for the Internet of People work for the Internet of Things?

The smartphone is the edge for the Internet of People. One of the reasons sensor technology has plummeted in price has been the rise of smartphones. Modern cell phones contain up to fourteen sensors, including accelerometers, gyroscopes, magnetometers and thermometers. Some phones have a built-in barometer, which measures atmospheric pressure; it’s used to determine how high the phone is above sea level, which improves GPS accuracy. Samsung pioneered the use of an air humidity sensor in its Galaxy phones; that data is used to tell whether the user is in their comfort zone. Some phones, such as the Galaxy S5, have heart rate monitors. Finally, you might not be surprised to know a Sharp smartphone sold in Japan contains a radiation sensor. These sensors interface to a big computer with lots of storage and connect out to three different kinds of networks. This hardware is all driven by innovative software from Apple and Google, which built software development environments that have given us millions of apps.
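
As an illustration of that barometer-to-altitude step, here is a minimal sketch using the standard barometric formula; real handsets fuse this estimate with GPS in firmware, so treat this as an approximation rather than the actual implementation.

```python
# An illustrative sketch of turning a phone's barometer reading into an
# altitude estimate using the standard barometric formula. This is an
# approximation; phones combine it with GPS and other signals in firmware.

def altitude_from_pressure(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude in meters from atmospheric pressure in hPa."""
    return 44_330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(altitude_from_pressure(1013.25))  # ~0 m at standard sea-level pressure
print(altitude_from_pressure(900.0))    # roughly 1,000 m
```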

So what’s the edge of the Internet of Things going to be? As someone who is responsible for your enterprise edge computing strategy I’d recommend you consider five major components.

Compute & Storage. It’s no real news, but you can now get high-powered computers with a 1.4GHz CPU, 1GB of memory and 128GB of storage for less than $75. By the way, that’s the spec of the iPhone 6. The days of PLCs and 8-bit microcontrollers are long past us. What are your edge compute & storage requirements? How much power is required? Do you need to run on batteries? Disposable? Rechargeable?

Communications. Your edge strategy should include a much wider range of connectivity options than just Wi-Fi and 4G. In addition to Wi-Fi and 4G, you may consider LoRa, Sigfox, NB-IoT, Zigbee, 5G cellular and even 60GHz wireless. How much data do you need to transmit, and how frequently? How much power can you consume? And how far is your edge from the cloud? Remember, Things can be where people are not.

Sensors. While your smartphone has a handful of sensors, the range of possible sensors is far greater. TE Connectivity’s catalog has 1,632 different kinds of sensors. Sensors can have many different quality levels. Just consider the light sensor in the camera on your phone. Did you know you can purchase an 8-megapixel camera with a sensor that is one-quarter the size of another 8-megapixel camera’s? Obviously, a larger image sensor has more surface area exposed to the available light, which will result in a better-quality image. What will your sensor strategy be? How many sensors? What quality of sensors?

Software. Windows, Android and iOS were all developed for the Internet of People. Remember, Visual Basic was built to make it easy to interact with people, so much of the focus has been on building better and more responsive people interfaces. But Things are not People. What software stack will you use? Will it enable you to interface with a wide variety of sensors and communication technologies? Does it provide a robust application developer environment?

Software Management. Just as with the edge of the Internet of People, the edge of the Internet of Things will increasingly be driven by software. Consider that in 2016 the Porsche Panamera had just 2,000,000 lines of code, while the 2017 Porsche has 100,000,000 lines of code. As you’ve already seen on your smartphone, software needs to be updated to improve performance, battery life and security. So what will your strategy be for managing the availability, security and performance of your edge software? How will you implement identity and access management for your Things? It’s not likely that you can make them change the password every 90 days and add a special character.

So as an IT professional there is much to learn, but as we have seen the edge for the Internet of People (your phone) has transformed the consumer experience. The edge for the Internet of Things promises to transform enterprises including oil, gas, water, industrial printing, transportation, construction, healthcare, agriculture, textiles and energy. So get started on your edge strategy.

Timothy Chou has been lucky enough to have a career spanning academia, successful (and not so successful) startups and large corporations. He was one of only six people to ever hold the President title at Oracle.
