While artificial intelligence for self-driving cars and virtual assistants gets most of the attention, the past few months have seen a wave of AI advances aimed at the work of analysts. “Analyst” is a ubiquitous role, found in every industry that touches data. Analysts take the measurements we make of our world and use them to answer relevant business questions — a role that is critical to extracting value from data. Yet AI is increasingly encroaching on that role.
This month Apple acquired Lattice.io for $200 million to automatically turn unstructured data into structured data — a task that is normally the realm of analysts. And a U.S.-based startup called Lapetus is looking to displace insurance risk analysts with artificial intelligence that the company says predicts life expectancy more accurately than traditional methods. These advances and others like them raise the question: Will AI replace analysts?
The short answer is no. But maybe not for the reasons you suspect.
More human than human
When most people think of artificial intelligence, they think of a coldly rational decision maker, lacking in emotion — like Data, the fictional android from Star Trek. That may have been an accurate description in the early days of AI, when programmers wrote custom rule engines to respond to certain scenarios. But as AI and machine learning have progressed, algorithms have become incredibly good at pattern recognition, and have started to act more biologically — more like instincts based on experience than decisions based on logic.
In Thinking, Fast and Slow, Daniel Kahneman describes the two systems of the human brain: System 1, which is automatic and intuitive, and System 2, which is conscious and logical. If the automatic system is the one producing our emotions — our reflexive response to things that might harm us (fear) or bring us good things (joy) — then AI is becoming much more like the emotional system than the rational one. In fact, recent advances in reinforcement learning, where an AI receives positive and negative signals in response to its actions and adjusts its behavior over time, already look remarkably similar to the way our emotional responses are programmed by past experience. And given the systematic errors Kahneman identified in our automatic thinking in his Nobel Prize-winning research, AI may surpass us in the quality of a “gut reaction” before it improves upon our logic.
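That reward-shaped learning loop can be made concrete with a toy sketch — not any particular company’s system, just a minimal multi-armed-bandit learner in which action preferences emerge purely from positive and negative feedback, with no hand-written rules (the function name and reward probabilities are illustrative):

```python
import random

def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Toy reinforcement learner: each action's value is shaped only by
    the +1/-1 signals it receives, much like instinct built from experience."""
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)   # learned preference per action
    counts = [0] * len(reward_probs)
    for _ in range(steps):
        # Mostly repeat the action that has "felt" best; occasionally explore.
        if rng.random() < epsilon:
            action = rng.randrange(len(reward_probs))
        else:
            action = max(range(len(values)), key=values.__getitem__)
        # Positive signal ("joy") with the action's probability, else negative ("fear").
        reward = 1.0 if rng.random() < reward_probs[action] else -1.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]  # running mean
    return values

# Two actions: one rewarded 80% of the time, one only 30%.
values = run_bandit([0.8, 0.3])
```

After a few thousand trials the learner strongly prefers the action that has historically been rewarded — a learned reaction to experience, not a logical deduction.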
With this shift toward System 1, some tasks once considered uniquely human are now within reach of advanced AI. Algorithms are so adept at pattern recognition that AI can now judge emotions, spotting fear and joy in human faces. AI has written poetry and composed music, as in Ji-Sung Kim’s deepjazz project. All of this shows that the line separating AI from human intelligence isn’t quite where most of us thought it was.