Metropolis-18: The New Tower of Babel (public domain, Flickr)

‘Control’ means different things in different data science communities

In his recent book Human Compatible, leading AI researcher and computer scientist Stuart Russell describes the dangers of “overly intelligent” algorithms and proposes a solution based on incorporating uncertainty into the algorithms’ “understanding” of human preferences. Russell is puzzled that statisticians, control theory researchers, and operations researchers never thought of this:

“In all the work on utility maximization, the utility function, the reward function, and the loss function are assumed to be known perfectly. How could this be? How could the AI community (and the control theory, operations research, and statistics communities) have such a huge blind spot for so long, even while…
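To make the contrast concrete, here is a minimal sketch (not from Russell's book; the actions, reward values, and belief weights are invented for illustration) of the difference between an agent that treats its reward function as known perfectly and one that maintains uncertainty over several candidate reward functions:

```python
import numpy as np

# Three available actions; rows of candidate_rewards are hypothetical
# reward functions the humans might actually have (values made up).
actions = ["a", "b", "c"]
candidate_rewards = np.array([
    [1.0, 0.8, 0.2],
    [0.1, 0.7, 0.9],
    [0.2, 0.6, 0.5],
])
belief = np.array([0.2, 0.3, 0.5])  # agent's probability over hypotheses

# "Known perfectly" agent: commits to the first hypothesis as the truth.
greedy_choice = actions[int(np.argmax(candidate_rewards[0]))]

# Uncertainty-aware agent: maximizes reward averaged over its belief.
expected_reward = belief @ candidate_rewards
cautious_choice = actions[int(np.argmax(expected_reward))]

print(greedy_choice, cautious_choice)  # the two agents can disagree
```

Here the point-estimate agent picks action "a", while averaging over the belief distribution shifts the choice to "b", which is the behavior Russell argues keeps an algorithm deferential to human preferences.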


Machine learning (ML) algorithms are being used to generate predictions in every corner of our decision-making life. Methods range from “simple” algorithms such as trees, forests, naive Bayes, linear and logistic regression models, and nearest-neighbor methods, through improvements such as boosting, bagging, regularization, and ensembling, to computationally intensive, black-box deep learning algorithms.
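As a small illustration of that spectrum (a sketch on synthetic data, not an example from the article), one can put a “simple” logistic regression next to a tree ensemble using scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary classification data, split into train and test sets.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A "simple" linear model...
simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# ...versus a bagged ensemble of trees.
ensemble = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

acc_simple = simple.score(X_te, y_te)
acc_ensemble = ensemble.score(X_te, y_te)
print(f"logistic regression: {acc_simple:.2f}, random forest: {acc_ensemble:.2f}")
```

Which end of the spectrum wins depends on the data; the point is only that both are a few lines of code, which is exactly why prediction algorithms have spread so widely.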

The new fashion of “apply deep learning to everything” has resulted in breakthroughs as well as alarming disasters. Is this due to the volatility of deep learning algorithms? …

Galit Shmueli

Distinguished Professor of Business Analytics, National Tsing Hua University
