The Return of The Big Questions
by Zak El Fassi
There is something deeply unsettling about the conversation around AI and the future of humanity. There are plenty of existential questions but little information. It makes us feel small and insignificant – like COVID did.
We are entering a new era of mass illusion or disillusion – depending on where you stand. Questions currently being asked range from the future of jobs and human creativity to whether a Super Intelligence/AGI we accidentally create takes over the world and destroys humanity.
These questions are either deeply insightful, disturbing, or plain stupid.
Again, depending on where you stand.
I found a common thread that links all these AI-driven existential questions: they force us to question the meaning of life and the value we bring as beings across work, creativity, and humanness. Our subjective value-add to the Universe.
I see these existential questions in the digital realm the same way our ancestors saw the stars: they looked up at the sky and needed to make sense of it all. Ancient religions emerged to make sense of that celestial display of planets and stars, leading our ancestors to create mythologies and stories that survive today.
Large AI Models (LAIMs) aren't any different from the stars that lit up our ancestors' skies. After all, don't the stars remind us that we are physically insignificant? In the same way, LAIMs can make us feel mentally negligible.
But there's a nuance: humans didn't create the stars but are creating these LAIMs.
This thought experiment opens a new philosophical exploration that could be at the root of it all: how much of human creation and creativity is anthropocentric (emerging from the human itself) versus stemming from a "divine power" – an ultimate Creator?
The answers to the questions above fall into place depending on how you answer this one. If human creativity stems from divine power, then so do LAIMs and everything they might enable through humans. And doesn't that also mean the models themselves have no access to that divine power – only humans do? Under that lens, the emergence of LAIMs is inevitable, and they are likely to be a net positive in the long term because they'll accelerate creativity and innovation, no matter what the current reading says.
If, however, human creativity doesn't stem from divine power and is instead a purely human feat, then yes, humans are the old gods. The AGI will create new ones, and those new gods will likely destroy us.
The telescope helped us make sense of the sky and the heavens, and it got us to a point where we started to see broader patterns from which the discipline of Science emerged. Those same patterns are beginning to break down in the age of information acceleration and Large AI Models, which might require us to build new telescopes that provide a more holistic view.
Some of those telescopes are likely rooted in returning to the big fundamental questions and not shying away from asking them – even if the answer is based on personal or religious belief.
Is there anything more profoundly human than asking big questions and seeking answers? In the glow of these Large AI Models – our new stars in the digital night sky – we stand at the crossroads of technology and philosophy. They hold up a mirror from which there is no hiding, forcing us to stare into the depths of our creativity and wrestle with our place in the Universe.
Whether these questions lead to discomfort or enlightenment, we should not cower from them. Instead, let's welcome them as prompts for meaningful introspection and conversations about our collective future. In the shadow of our most formidable uncertainties, we may uncover the most profound truths about ourselves. After all, isn't that the essence of human exploration – an experiment into the unknown that takes us back to ourselves?