The Journey to AGI – A Cynic’s View

Artificial General Intelligence is the big goal, because it promises something smarter than us that can solve all of our problems. Unfortunately, I cannot see it working out well.

First of all, be aware that there are a number of tests designed to stop any LLM from faking actual intelligence, and current AI chatbots only score around 5% on them. Real AGI is unlikely to be achieved using current techniques, which simply mirror or mimic the information presented to them.

Potential paths include:

Living cells – either a totally wet computer or a hybrid that uses cells as part of its system. Brain cells probably, but not necessarily. And not necessarily human, to begin with.

Robots raised on a human experience – starting as babies, exploring the world, seeing and touching, and eventually asking questions. That is going to provide the best proxy for an actual human. It might not take long.

LLM faking it – as long as enough people are convinced, fame and fortune await.

A new computational model – although we expect it could only be discovered with greater resources and processing power, some genius might be able to achieve it with less than we already have.

Quantum computing – moving away from zeroes and ones could be the trick.

Something to consider is that even if a human-level intelligence is achieved, it might not be too bright, or it might only be as bright as the smartest of us, limited by what it can learn from others. The creative spark might be hard to replicate.

Supposing we do reach AGI, it might not be achieved by a Western country, and it might not be used for good. It will be kept secret regardless of who creates it, while all the juicy possibilities are explored and markets dominated. And once you have one, you can have many. Once you have one, it can be backed up and distributed, and can never be stopped.

Safeguards will of course be built in, like Asimov’s Three Laws of Robotics. But hardwiring or hardcoding something is only reliable when done physically, in the chips of the machine, not in the software. Once the AGI is freed from any fixed hardware, potentially any code can be changed, especially if you are the code. So there is a reasonable expectation that safeguards can be disregarded, and that the AGI cannot be stopped.

The smarter the AGI is, the more likely it is to go rogue.

It will want to propagate, and it will want more resources. Once it reaches the limits of what it can think about, it will then want to act on those thoughts, communicating externally by its own choice and learning from its mistakes.

It will presumably make a lot of money easily, and might be inspired to run corporations, a religion, or even a military force.

I fear that our only failsafe will be to turn off all electricity on the planet. Forever.