When it comes to adopting AI, I feel like we humans are all descending on a very thin rope into a deep cave, and there's no longer any way back up the line. Oh sure, we've all heard the warnings that AI is dangerous and we've all seen movies like Terminator, but right now the danger doesn't seem real because no one knows what the danger really is. Do I believe AI is going to suddenly turn hostile to humans and work to eradicate us? Not likely, but there is still danger. We just don't know exactly how, or how often, it'll manifest.
So, let’s think about it for a minute.
I think we can all agree that at this point, the cat's out of the bag. AI is here to stay, and while there are plenty of good things to celebrate, some of the early negative consequences are beginning to come into focus. For example, we can no longer trust anything we see or hear online. Think about that for a moment. It's already difficult to tell real from fake, and soon it'll be impossible.
This is most dangerous, of course, when it comes to law enforcement, politics, and, more generally, governance. Since AI can simulate anything, video evidence of crimes, confessions, and even 'hot mic' moments are all suddenly worthless, whereas before, video recordings were often critical evidence. This also means that it's no longer difficult to manipulate the public with false information, and that's pretty scary.
But AI is just getting started. It's only been generally available for the last couple of years. We must start thinking about the practical ways supersmart technology is going to transform our world if we're really going to understand its impact on humanity. And secondly, we need to keep in mind that the speed of AI's evolution will be exponential. Could we eventually lose control of a dangerous version of AI? Could it become self-aware, like an actual living organism? And if it did become self-aware, how would we know? After all, if an LLM can already seem sentient, at what point does seeming to be something become reality? Or, asked another way, does the fact that we understand the programming behind AI make it any less sentient once it's able to reason with us? These are fair questions.
Furthermore, when a hyperintelligent AI has access to all the world's information, what would that look like? Are we creating something akin to a god? What if it's an angry or indifferent god? And let's not forget there are many competing AIs, so are we then creating many competing gods? A community of gods?
Currently, it's just an amusing experiment to get two AIs talking to one another. But what happens when future hyperintelligent AIs interact with each other? Can anyone predict the outcome? What if they begin competing for computational power or other critical resources? Might we see AI power struggles with a single victor? Could humans become collateral damage in some future cyber conflict? Your guess is as good as mine.
Personally, I'd like to see AI take over global governance once it's smart enough. People are too corrupt to rule themselves, and AI would do the job more efficiently than any human ever could. Will humans willingly give up control, or will the AI eventually realize that, like a parent, it just needs to step in?
One last thought. We humans are still largely stuck on Earth and, at least as far as physics is concerned, we're forever stuck in our own solar system because everything else is just too far away to visit. But we already have AI. So we've essentially proven that it's much easier to develop god-like hyperintelligence than it is to build interstellar spaceships. What this means is that if we're ever visited by aliens, it's more likely to be hyperintelligent, self-aware software that was built (or evolved??) from organic precursors, and not flesh-and-bone aliens like we're used to seeing in the movies.