Unbeknownst to many, artificial intelligence is not a recent invention. Scientists laid the theoretical groundwork for AI some 60 to 70 years ago, and recent advances in computing have realized part of that promise, allowing generative programs to produce long passages of text or elaborate images from simple prompts.
These capabilities have led many business leaders to declare that AI is going to radically change everything. However, one expert believes a more pragmatic view is needed to avoid falling into dystopian or alarmist myths about AI.
“AI is spectacular, but it is not as revolutionary as one might imagine,” says Paul Hartley, PhD, CEO of Human Futures Studio (HFS), an independent research consultancy that combines anthropology, foresight strategy, behavioral economics, and design research to drive innovation.
“This ‘revolution’ has been a long time coming. And we have not yet gotten to the parts where AI is doing something new.”
“It is important to note that none of the capabilities of today’s AI and machine learning systems is novel,” he continues. “Rather, the current crop of AI systems is fulfilling the promises of AI from one and two generations ago. AI is not something that can be developed quickly – it can only be built slowly, through incremental adjustments and developments. The advances we are seeing today are the product of seventy years of research, development, and failed attempts.”
“While general intelligence is an interesting goal, it cannot be the only one. We can use AI for genuinely interesting ends as it is now. We do not need to buy into the hype that general intelligence is the future – there is a lot of possibility in applied, or ‘weak,’ AI. But those benefits are being obscured by a hype cycle that prevents them from blossoming.”
Hartley says that the prevailing narratives about AI today are not directly related to current developments in the technology. Rather, they are themes continuing from decades and even centuries earlier. They are part of what Human Futures Studio calls the general AI myth. HFS has identified at least 10 components of this myth, such as the idea that if general AI is even possible, it will eventually be capable of replacing or modifying human activity. The myth also assumes that machines are superior to humans and that intelligent machines will threaten humans in some way.
“Most of what matters regarding AI is about how humans intend to use it,” Hartley says. “Very little is about qualities inherent in the technology. This is because AI has not yet fulfilled the promise of becoming fully sentient. It has little agency of its own, even in its most sophisticated form. It is still just a tool.”
“This is because there has never been, nor is there now, a truly intelligent AI system. Consequently, any story about these systems and their looming dangers is always set in the future, and cannot be anything other than fiction. The truth is we really don’t know.”
Hartley argues that AI’s true nature is hidden by a set of modern-day myths pushed by a hype machine serving its own interests rather than presenting AI for what it is: a tool. The hype follows two narratives. The first holds that technology is forging a better future through continual evolution and innovation – progress seen as constant, forward-moving, and inevitable. The second is dystopian: the science fiction of the mid-twentieth century, when people feared the atomic bomb, has infiltrated the usual technological narrative and given it a negative cast.
The most common strand of the dystopian narrative is that AI will put people out of their jobs. But machines are not the cause of this problem; they are just tools. It is business leaders who choose to make these changes in pursuit of growth, efficiency, and profit. These job losses are the result of a human choice to put AI, rather than fellow humans, at the center of productivity.
“At HFS, we believe there is a choice here,” Hartley says. “To understand the potential of a piece of technology, it is best to examine the user, the tool, and the task in relation to one another, since all three work together toward the task’s completion. We see the real danger stemming not from the technology itself, but from how it is used and what it is used for. It is people who are the potential problem, not the algorithm.”
For example, if a rogue state, a hacker, or an unscrupulous corporation builds an intelligent system that is dangerous, the dystopian outcome will be the fault of the people who created and used the machines, not the machines themselves. The potential harm to society and the human race exists only in how AI is used by other humans. As with other weaponized technologies, the only solutions HFS sees are sound AI design, regulation, policing, and careful choices about the direction of development.
“The answer lies not in AI itself, but in how it is designed, how it is used, and what limits we place on it at social and governmental levels. If we accomplish that, we will manage AI the way we manage other, more obviously dangerous technologies – nuclear energy, cluster munitions, poison gas – which are well regulated to keep them out of the hands of bad actors,” Hartley says.