A Journalist’s Guide to Demystifying A.I.

Better to light a candle than curse the algorithm

(Image: Gerd Altmann, Pixabay)

We’re increasingly surrounded by smart devices and systems, with built-in intelligence for anticipating our needs. If you’ve talked with a voice-recognition system when calling a customer-support number, had your face scanned when entering the U.S. from abroad, or asked Alexa or Siri to tell you the weather forecast, you’ve interacted with artificial intelligence, or A.I.

While we enjoy the convenience of smart devices and systems, they also raise concerns among users as A.I. emerges in more and more of life’s activities: at work, home, school, commerce, and leisure. Even newsrooms are a target for A.I. And these systems are filtering down to the mobile devices we constantly carry around. Just ask Siri.

We would like to assume that the smarts in these systems are expertly made and thoroughly tested and validated before being released for everyday use. But can we make that assumption? As journalists, we’re asked to report on developments affecting people’s day-to-day lives. Yet as A.I. technology forges ahead, journalists are often left on the outside looking in with everyone else. It’s time — actually, past time — for reporters to ask developers of smart systems what’s in the A.I. to which we’re turning over more of our lives.

About A.I. and journalists

The term artificial intelligence covers a number of different technologies, including natural language processing, computer vision, and voice recognition, used to sense and understand signals and stimuli from the surrounding world. While some of these advances have been in the works for decades, their extension into our day-to-day lives results from the widespread availability of enormous computing power, the proliferation of mobile devices, and the emergence of cloud computing to store vast amounts of data.

But perhaps the most dramatic advances in A.I. are in machine learning. Because of machine learning, smart systems are becoming even smarter as they incorporate the data they encounter.

Systems now not only remember the data they encounter; they learn and grow more knowledgeable from those data, in many cases more efficiently and predictably than humans. One can make the case that the conveniences provided by machine learning are offset by the uncertainties it creates over the role of humans in an increasingly smart-technology world.

And we’re likely to see even more A.I.-enabled systems in our lives. While there’s no solid count of the number of smart systems and devices out there, a recent compilation of venture capital investments gives us a pretty good clue of the trend. This tally, by the market intelligence company GlobalData, shows a sharp uptick over just the past year in venture deals and funding for start-up companies developing A.I. systems, making A.I. one of the hot venture capital targets. (Data for the first quarter of 2019 show a slowing of A.I. venture investments from last year’s torrid pace.)

It’s not that journalists are oblivious to all this, but the focus has been rather narrow, looking, for example, at the potential effects of A.I. on jobs. And I plead guilty to writing that kind of story as well. In fact, another recent study shows companies are adopting A.I. systems because they’re good for business, not necessarily to eliminate jobs. This study, conducted by the market data company Statista for the Consumer Technology Association, shows the top business applications of A.I. are better detection of security threats and improved customer service, functions directly related to an enterprise’s survival.

A better story about A.I. examines what smart systems do, how they do it, and whether they do it well. In other words, light a candle inside the black box, rather than complain about the darkness. To light this candle, it helps to know which questions to ask, and we’ll examine those questions here.

About algorithms

The guts of machine learning are algorithms, the logic behind the computer code that handles the information taken in by the smart device or system. Algorithms are sets of rules and processes for performing a task, usually represented as mathematical formulas or logic. They can be simple, like the formula for calculating interest payments on a loan, or complex enough to account for a multitude of factors and conditions.
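To make that concrete, here is a minimal sketch in Python of the loan-payment example as an algorithm; the loan figures are illustrative only, not drawn from any real loan.

```python
# A simple algorithm: the fixed monthly payment on an amortized loan,
# using the standard annuity formula. Figures below are illustrative.

def monthly_payment(principal, annual_rate, years):
    """Return the fixed monthly payment for an amortized loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Example: a $200,000 mortgage at 4% over 30 years.
print(round(monthly_payment(200_000, 0.04, 30), 2))  # about 954.83
```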

In machine learning, algorithms take on a special quality: they adjust their rules, processes, and calculations as they encounter more data. Many algorithms used in machine learning derive from Bayesian statistics, named for Thomas Bayes, an 18th-century British clergyman and philosopher, whose theorem led to calculations that predict the probability of an outcome occurring based on other related probabilities. In these calculations, the more data used to calculate the outcome, the more reliable the prediction. Nate Silver, who founded FiveThirtyEight, helped popularize Bayesian statistics in election polling and in predicting the outcomes of sporting events, even as the games are played.
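As a minimal illustration of Bayes’ theorem at work, here is a short Python sketch; the screening-test numbers are invented for the example, not taken from any real test.

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Invented example: a screening test that is 90% sensitive and 95%
# specific, for a condition affecting 1% of the population.

p_condition = 0.01            # prior: P(condition)
p_pos_if_condition = 0.90     # sensitivity: P(positive | condition)
p_pos_if_healthy = 0.05       # false-positive rate: P(positive | healthy)

# Total probability of a positive result, P(positive)
p_pos = (p_pos_if_condition * p_condition
         + p_pos_if_healthy * (1 - p_condition))

# Posterior: probability of the condition, given a positive test
posterior = p_pos_if_condition * p_condition / p_pos
print(round(posterior, 3))    # about 0.154
```

Feed the calculation more evidence (say, a second independent test) and the posterior from the first round becomes the prior for the next; that is how more data sharpen the prediction.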

For machines to learn with these algorithms, they need data, and lots of it. The more data encountered and absorbed by the algorithm, and the more diverse the conditions expressed in those data, the more refined the predictions should be and the more confidently they can be applied to larger populations. Developers of machine learning systems write their algorithms, then train the routines, often with data from large-scale databases. Once put into use, these algorithms continue to learn from the data they encounter and become more refined in their outcomes.
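As a sketch of that train-then-validate cycle, here is what it can look like in Python with the scikit-learn library; the bundled data set and simple model are placeholders for illustration, not anyone’s production system.

```python
# Minimal train-then-validate cycle with scikit-learn.
# The data set and model here are placeholders for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

# Hold out a quarter of the data that the model never sees in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)          # training: the algorithm absorbs the data

print(model.score(X_test, y_test))   # accuracy on the held-out data
```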

While the idea of algorithms is rather straightforward, they can, as noted earlier, become quite complex. Yet business executives, policy makers, and average citizens are using machine learning algorithms to help make more decisions than ever, decisions that depend on the accuracy and reliability of those calculations. We can simply trust the mathematicians and engineers who write algorithms, or we can insist on safeguards to check their accuracy and reliability.

Why do we need to check on algorithms? We have bitter experience from the 2007–2008 financial meltdown. Among the causes of the worst economic downturn since the 1930s were complex financial instruments called collateralized debt obligations, a type of derivative that pools together other assets, such as mortgage loans and bonds, and repackages them as separate investments.

Many collateralized debt obligations were quite detailed and complicated, requiring mathematical formulas and computer logic expressed in algorithms to precisely determine outcomes and manage risk. Before the crash, many Wall Street investment firms hired physicists and mathematicians, called “quants,” to write these algorithms, and the complexity of the algorithms often masked the underlying riskiness of the mortgage loans making up the packages being marketed to investors.

As the underlying subprime loans in those packages went bad, the collateralized debt obligations built on this shaky foundation crashed with them. And because of their complexity, investment banks like Bear Stearns and Lehman Brothers, which owned these securities and went under as well, couldn’t explain why. (I wrote a feature about scientists hired as quants for the careers section of Science magazine in 2008.)

Lighting the candle

(Image: Science Translational Medicine)

How can reporters, particularly those without a background in math and statistics — and that covers a lot of today’s journalists — find out what’s in these machine learning algorithms? By asking the right questions: about the problem the algorithm aims to solve, the process used to write the algorithm, and its track record in practice. If you think about it, reporters routinely ask the same kinds of questions of politicians, business executives, entertainers, and sports stars. We now need to ask them as well of A.I. systems developers.

Start with basic facts about the algorithm:

1. What problem is the A.I. system solving?

2. What is the algorithm calculating that helps solve that problem?

3. What are the data sources for training the algorithm?

4. What is the algorithm learning as it encounters data?

The next group of questions asks about the quality and reliability of the algorithms. This can be a tricky area, of course, since few people other than data scientists have the technical background to evaluate an algorithm’s math or logic.

We’re helped here by an article published in the journal Science Translational Medicine in December 2018, “Big data and black-box medical algorithms,” by W. Nicholson Price, a law professor at the University of Michigan. Price offers a strategy to help users of machine learning algorithms in the medical field understand what’s in them.

Price proposes a series of questions for developers of machine learning systems, to help regulators of medical systems, such as the Food and Drug Administration, get a handle on artificial intelligence. (In April 2019, FDA announced its intention to create a regulatory framework for medical algorithms.) But Price’s strategy for understanding the insides of medical black boxes can also be applied to other machine-learning algorithms, with questions such as:

5. Was the algorithm reviewed independently, or are the code and databases used to train the algorithm available for independent reviewers?

6. Was the algorithm run against data sets other than those used for training, to validate the initial findings?

7. What’s the algorithm’s track record with real-world data?

Applying the questions to a real algorithm

Here’s an example of these questions applied to a health care algorithm, which Science & Enterprise reported on in March 2019. A team of medical researchers and data scientists from Stanford University and Technion, Israel’s leading science and engineering institution, wrote a machine-learning algorithm to calculate a measure of a person’s immune system health, described in a paper published in the journal Nature Medicine. Invoking the immune system is quickly becoming a key mechanism of new treatments for many disorders, particularly cancer. But these new therapies need a well-functioning immune system in the patient to work effectively.

The team, led by Mark Davis at Stanford and Shai Shen-Orr at Technion, wrote an algorithm to calculate what they call immune age. A person’s immune system gradually declines over time, but chronological age alone is at best a rough indicator of whether a treatment invoking the immune system is suitable. Davis, Shen-Orr, and colleagues therefore devised this algorithm for computing a person’s immune age, which they say provides a more sensitive and reliable measure of immune system health.
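The team’s actual model is far more sophisticated, but purely as an illustration of the concept, an index like immune age can be thought of as a weighted combination of standardized immune markers. Everything in this Python sketch (the marker values, the weights, the scaling) is hypothetical.

```python
import numpy as np

# Hypothetical sketch of an "immune age"-style index: combine
# standardized immune markers into one score. This is NOT the
# Stanford-Technion method; all numbers here are invented.

markers = np.array([1.2, -0.4, 0.8])   # z-scored marker values for one person
weights = np.array([0.5, 0.3, 0.2])    # invented weights a model might learn

score = float(weights @ markers)        # weighted combination of markers
immune_age = 50 + 10 * score            # invented scaling onto an age-like axis
print(round(immune_age, 1))             # 56.4 in this toy example
```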

Let’s apply our 7 questions to this algorithm.

1. What problem is the A.I. system solving?
The system fills the need for a better measure of immune-system health to determine if treatments that invoke the immune system are suitable for a person.

2. What is the algorithm calculating that helps solve that problem?
The algorithm calculates an index called immune age, computed from a wide range of overall and immune-system health indicators.

3. What are the data sources for training the algorithm?
The researchers tracked 135 healthy individuals for 9 years, taking periodic measures of various immune-system indicators: genes expressed in whole blood samples, responses of cells to cytokines, the signaling proteins emitted by immune system cells, and characteristic traits of specific cell subsets in the body.

4. What is the algorithm learning as it encounters data?
The algorithm learns more about an individual’s immune system condition over time and changes its calculation of immune health as the person ages.

5. Was the algorithm reviewed independently, or are the code and databases used to train the algorithm available for independent reviewers?
The researchers make the data and source code available to independent researchers.

6. Was the algorithm run against data sets other than those used for training to validate the initial findings?
The team ran their algorithm against health data from a sample of 2,000 participants in the Framingham Heart Study, a landmark ongoing survey, begun in 1948, of the cardiovascular and overall health of more than 15,000 people in Framingham, Massachusetts. The researchers say their immune age algorithm accurately predicted mortality rates in the cases they sampled.

7. What’s the algorithm’s track record with real-world data?
The Nature Medicine paper was published in March 2019, so as of April 2019 there’s been little chance for a real-world test. However, the immune age algorithm is more than an academic exercise. The company CytoReason in Tel Aviv, co-founded by Shen-Orr, licenses the intellectual property from Technion to simulate cell behavior for drug discovery and clinical trial planning. The company’s experience with the immune age algorithm could offer an answer to this question, as well as provide new data to refine the algorithm as more diverse populations are encountered.

Bonus questions: What if a company claims its algorithm is proprietary?

Companies offering services with machine-learning algorithms may claim their algorithms are trade secrets and cannot be disclosed. The immune age algorithm used here as an example may offer some ways around that claim. Here are further questions to ask:

Is the company’s algorithm based on academic research? If so, an earlier version may be published in a journal, like the Stanford-Technion paper in Nature Medicine. The company may add to or fine-tune the calculations, but the published research can provide a good approximation of the latest version.

Is the company’s algorithm, or the research behind it, funded by the U.S. government? National Institutes of Health was one of the funders of the immune age algorithm. Funding announcements from federal agencies are public documents and searchable online. Here are links to databases of funded research from:
– National Institutes of Health
– National Science Foundation
– Department of Defense medical research
– DARPA artificial intelligence projects

Is the algorithm patented, or has the company applied for a patent? If the algorithm is critical to the company’s success, then the company — or the academic institution from which it licensed the technology — likely applied for a patent to protect the intellectual property. A search through patent databases, like those provided by the U.S. Patent and Trademark Office or third-party services, can retrieve the patent text. Again, the patent may not reflect all of the algorithm’s latest features, but it can offer a good idea of its workings.
