You may wonder: is the world simply too disorganized for any technology to manage? Or is technology revealing that things are even more disorganized and unmanageable than we first thought? Artificial intelligence, machine learning, and allied technologies may be underscoring a realization Albert Einstein voiced many decades ago: “The more I learn, the more I realize how much I don't know.”


Even as organizations press to adopt the newest analytics, the best technologies, such as predictive algorithms and artificial intelligence, can fail to clarify, and may even expose, the complexity and interconnections that shape events and trends. David Weinberger of Harvard University explains in his most recent book how AI, big data, science, and the internet are all revealing a fundamental truth: things are more complex and unpredictable than we've allowed ourselves to see.

"Our unstated contract with the universe has been if we work hard enough and think clearly enough, the universe will yield its secrets, for the universe is knowable, and thus at least somewhat pliable to our will," Weinberger writes in Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility. "But now that our tools, especially machine learning and the internet, are bringing home to us the immensity of the data and information around us, we're beginning to accept that the true complexity of the world far outstrips the laws and models we devise to explain it."

The irony is that the systems we have built to make sense of the world, such as machine learning and deep learning, are only adding to the opacity. To illustrate the trend, Weinberger points to Deep Patient, an AI program developed at Mount Sinai Hospital in New York in 2015. Deep Patient was fed the medical records of 700,000 patients as unstructured data, with no framework to categorize them and no guidelines for how to use them. Yet, even when given only three incomplete pieces of data to analyze, Deep Patient can identify the probability of patients developing certain diseases more accurately than doctors can.
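The book does not spell out Deep Patient's internals, but the core idea, raw records in and risk estimates out with no hand-built diagnostic model in between, can be sketched. The Python snippet below is a deliberately toy illustration on synthetic data with invented feature semantics; it is not Deep Patient's actual architecture or data.

```python
# A drastically simplified sketch of the Deep Patient idea: feed raw,
# uncurated patient features into a neural network and let it learn risk
# without any hand-built diagnostic model. All data here is synthetic and
# the feature meanings are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is one patient's record flattened into raw indicator
# features (diagnoses, medications, lab flags) with no schema imposed.
n_patients, n_features = 5000, 200
X = rng.integers(0, 2, size=(n_patients, n_features)).astype(float)

# Synthetic "develops the disease later" label, driven by a hidden, tangled
# combination of features that the modeler never writes down explicitly.
hidden_weights = rng.normal(size=n_features)
y = (X @ hidden_weights + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The network builds its own internal representation of the records;
# nothing in this code encodes medical knowledge or diagnostic rules.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# The learned weights exist, but they do not map back to a human-readable
# explanation of why a given patient is flagged as high risk.
```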


The one catch behind Deep Patient's success is that no one knows why or how it arrives at its conclusions. "The number and complexity of contextual variables mean that Deep Patient simply cannot always explain its diagnoses as a conceptual model that its human keepers can understand," Weinberger says.

According to Weinberger, success with AI and automation comes from accepting and managing the results these systems deliver, not from trying to decode the reasoning hidden in the data fed into them. If A/B testing shows that text on a website draws more traffic than a photo placement, go with it.
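As a concrete illustration of that results-first stance, here is a minimal Python sketch of such an A/B comparison using a chi-square test; the visitor and click counts are invented, and the 0.05 threshold is just a common convention.

```python
# A minimal sketch of the kind of A/B decision Weinberger describes:
# compare two page variants on outcomes alone, without modeling *why*
# one works better. The counts below are made up for illustration.
from scipy.stats import chi2_contingency

# Variant A: text headline, variant B: photo placement.
visitors = {"text": 10000, "photo": 10000}
clicks = {"text": 540, "photo": 470}

table = [
    [clicks["text"], visitors["text"] - clicks["text"]],
    [clicks["photo"], visitors["photo"] - clicks["photo"]],
]

chi2, p_value, dof, expected = chi2_contingency(table)

print(f"text CTR:  {clicks['text'] / visitors['text']:.3%}")
print(f"photo CTR: {clicks['photo'] / visitors['photo']:.3%}")
print(f"p-value:   {p_value:.4f}")

# If the difference is statistically meaningful, ship the winner and move on;
# no causal explanation of visitor behavior is required.
if p_value < 0.05:
    print("Go with the better-performing variant.")
```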

Weinberger offers four rules of the road that should guide our expectations for machine learning and deep learning applications:

  • Guide the workflow of artificial intelligence systems, but let the AI determine how results are produced. Today's deep learning models "are not created by humans, at least not directly. Humans choose the data and feed it in, humans head the system toward a goal, and humans can intercede to tune the weights and outcomes. But humans do not necessarily tell the machine what features to look for. Because the models deep learning may come up with are not based on the models we have constructed for ourselves, they can be opaque to us."
  • Let go of the old models that shaped our expectations. "Deep learning models are not generated based on simplified principles, and there's no reason to think they are always going to produce them."
  • Don't expect to understand what drives AI decisions. "Deep learning systems do not have to simplify the world to what humans can understand," Weinberger says. "Our old, simplified models were nothing more than the rough guess of a couple of pounds of brains trying to understand a realm in which everything is connected to, and influenced by, everything."
  • Finally, it's about the data. "Everything connected to everything means that machine learning's model can constantly change. Changes in machine learning models can occur simply by retraining them on new data. Indeed, some systems learn continuously." A rough sketch of that continuous updating follows this list.
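The snippet below, assuming a scikit-learn-style incremental learner and synthetic, drifting data, shows a model being updated batch by batch rather than trained once and frozen. It illustrates continuous learning in general, not any particular system Weinberger describes.

```python
# A rough sketch of "some systems learn continuously": a model updated
# incrementally as new batches of data arrive, so the learned model itself
# keeps shifting. Data and settings are invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(random_state=42)
classes = np.array([0, 1])

def next_batch(step, n=500, n_features=20):
    """Simulate a new batch of data whose underlying pattern drifts over time."""
    X = rng.normal(size=(n, n_features))
    drift = 0.1 * step                      # the world keeps changing
    weights = np.linspace(1.0, -1.0, n_features) + drift
    y = (X @ weights > 0).astype(int)
    return X, y

for step in range(10):
    X, y = next_batch(step)
    if step > 0:
        # Evaluate on the new batch before updating: yesterday's weights
        # meet today's data.
        print(f"batch {step}: accuracy before update = {model.score(X, y):.3f}")
    # partial_fit updates the existing model instead of retraining from scratch,
    # so each new batch nudges the model's weights.
    model.partial_fit(X, y, classes=classes)
```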

Eventually a question arises: should we keep faith in AI output when injustice needs to be questioned or analyzed? While Weinberger's book does not take on the issue of human bias being baked into AI algorithms head-on -- that is the theme of entire books in their own right -- he notes that human biases find their way into results, and those results reflect our own failings. People look for explicability "to prevent AI from making our biased culture and systems even worse than they were before Artificial Intelligence. Keeping AI from repeating, amplifying and enforcing existing prejudices is a huge and hugely important challenge."

Despite the growth of automated decision-making, there is still a need for critical thinking -- human critical thinking -- to run businesses and institutions. People should be able to override or question the results of AI systems, particularly when the process is opaque. Right now, every job, training programme, and course curriculum should include this critical skill.