Artificial intelligence is not new – it already permeates our lives by being embedded in the majority of digital devices, software and services that we use.
The knowledge and insights AI captures and presents are vast – we’re more informed and efficient than ever. Yet AI is still in its infancy. Initially, it simply followed a series of logical rules applied to a limited set of inputs. It has since evolved to train algorithms to identify patterns in data, enabling reasoning and decision-making. Much work has already gone into mapping parts of the human thought process so that machines can be taught to think the same way.
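To make that shift concrete, here is a toy sketch (my own illustration, not from any system mentioned in this article) contrasting a hand-written rule with a rule "learned" from labelled data. The function names and the temperature data are hypothetical.

```python
# Early AI: a human encodes the decision rule directly.
def rule_based(temp_c):
    return "hot" if temp_c > 30 else "cold"

# "Learning": derive the rule from examples instead of hand-coding it.
# Here the learned rule is just the midpoint between each class's average.
def learn_threshold(samples):
    hot = [t for t, label in samples if label == "hot"]
    cold = [t for t, label in samples if label == "cold"]
    return (sum(hot) / len(hot) + sum(cold) / len(cold)) / 2

data = [(35, "hot"), (31, "hot"), (5, "cold"), (12, "cold")]
threshold = learn_threshold(data)  # pattern found in the data: 20.75

def learned(temp_c):
    return "hot" if temp_c > threshold else "cold"
```

The hand-coded rule calls 25°C "cold", while the data-derived rule calls it "hot" – the point being that the second rule was discovered from examples, not prescribed by a person.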
In an article on the subject, @tanmay suggests that machines could add a new and interesting dimension to the ideation process:
“Due to the ever-growing volume and detail of data available to help feed AI's capabilities, it is not unreasonable to expect that over time these systems are able to grow their “creative space” and come up with new possibilities before a human mind can.”
In the excellent international bestseller Life 3.0, author Max Tegmark considers the transformational consequences that would follow if AI achieves consciousness and becomes a superintelligence. It may sound a little far-fetched, but the pace of advancement is leading experts to believe this may happen in our lifetime.
Scientists at MIT have already claimed a breakthrough in adding human intuition to an algorithm, and artificial intuition is now a component of some AI systems.
We have managed to achieve the seemingly impossible and sequenced the human genome. Is it so hard to imagine that capturing creativity in an algorithm might not be far off? If we can generate artificial intuition, artificial empathy is a possibility – will technology become sophisticated enough to connect random thoughts in an interesting and meaningful way?
Kevin McCullagh from Plan believes that humans have a unique way of understanding the world that machines can never replicate. Here’s a slide from one of his recent presentations:
Yet Kieran Kelly, observing AI’s evolution in his blog post The Coming Age of Augmented Creativity, writes: “As the field of artificial intuition systems develops further, we will increasingly find that machines will be able to exhibit emergent expertise and resultant creativity when it comes to problem solving.”
The addition of non-human expertise in the workforce could alleviate many standard HR headaches. While both people and machines are prone to breakdown if not looked after properly, here’s a list of advantages I can think of in having a robot on the team:
Consumers have come to appreciate, and expect, ever more sophisticated AI, putting designers under pressure to keep up with demand and expectation and amplifying the need for ingenuity. As AI pushes its boundaries and mimics the subtle nuances of human behaviour and understanding, it is likely to take on more and more of the creative legwork. Will this development push humans towards the role of curator rather than creator: guiding, elevating, pushing, pivoting and directing the final result?
Where will that lead? Will the options generated by a machine give us a more interesting and innovative range of choices or less imaginative ones? Do we start to become more rule-based or rule-breaking? Do we gravitate more towards patterns or will anomalies be more attractive? Will we be in a position to refute the evidence presented by a machine? Can a machine learn to be entrepreneurial and come up with the next Amazon, Uber, AirBnB?
According to Max Tegmark, the closer AI gets to artificial general intelligence, the more humans lose their advantage over machines, and we need to start thinking about how we want to co-exist with super-intelligent machines capable of outthinking us on every front. He encourages us to consider whether we want there to be a superintelligence and whether we’d prefer it to be a ‘benevolent dictator’, a ‘protector god’, an ‘enslaved god’ or a ‘gatekeeper’.
As we continue to empower and advance technology, we must stay alert to the potential consequences and conscientiously plan for the kind of long-term future we want. As NYU professor Amy Webb warns: "we’re heading into a situation in which systems will be making choices for us. And we have to stop and ask ourselves what happens when those systems put aside human strategy in favor of something totally unknown to us."