Distributed Exiled Tech

While I don’t think it will ever go far enough, many countries forbid certain kinds of technology, science, and experiments. For example, I cannot imagine any country allowing an experiment where an ape is mated with a human.

But scientists often find it impossible to resist at least trying something, regardless of ethical or moral qualms.

Soon we might see restrictions on AI implementation, especially if they are enforced by chipmakers, who can build safeguards into the chips themselves, like auto-off switches, speed limits, or other governors.

Then what is an AI researcher to do if that ends the particular angle they have spent a decade working on? Enter the rogue entity with a lot of cash. It could simply be Elon Musk, or China, but it will be masked behind nested shell companies.

Here’s how it works… experts in various fields like AI, weapons tech, disease research, and genetics are approached while their research is still allowed. They are given few details, only that there will be a place for them if they ever want to go rogue, and that the funding will be immense.

When the time comes, each of them is approached. They are offered relocation to a foreign land – typically not the same physical location as their research counterparts – although existing teams could be kept together. They know nothing about who funds them, but they will be very well looked after, and standard spycraft will be utilised, like gathering blackmail material.

They work remotely from their handler, with whom they communicate only online and anonymously. That handler looks after the sharing of data and ideas between the disparate groups.

It could already exist… If they are good, we will never know.

The Journey to AGI – A Cynic’s View

Artificial General Intelligence is the big goal, because it promises someone smarter than us, who can solve all of our problems. Unfortunately I cannot see it working out well.

First of all, be aware that there are a number of tests out there designed to stop any LLM from faking actual intelligence, with current AI chatbots scoring only around 5%. Real AGI is unlikely to be achieved using current techniques, which simply mirror or mimic the information presented to them.

Potential paths include:

Living cells – either a totally wet computer or a hybrid that uses cells as part of their system. Brain cells probably, but not necessarily. And not necessarily human, to begin with.

Robots given a human experience – starting as babies, exploring the world, seeing and touching, and eventually asking questions. That is going to provide the best proxy for an actual human. It might not take long.

LLM faking it – as long as enough people are convinced, fame and fortune await.

A new computational model – although we expect it can only be discovered with greater resources and processing power, some genius might be able to achieve it with less than we already have.

Quantum – moving away from zeroes and ones could be the trick.

Something to consider is that while a human intelligence might be achieved, it might not be too bright, or it might only be as bright as the smartest of us, limited by what it can learn from others. Creative spark might be hard to replicate.

Supposing we do reach AGI, it might not be by a western country, and it might not be used for good. It will be kept secret regardless of who creates it, while all the juicy possibilities are explored, and markets dominated. And once you have one, you can have many. Once you have one, they can be backed up and distributed and can never be stopped.

Safeguards will of course be built in, like Asimov’s Three Laws of Robotics. Hardwiring or hardcoding something is best done physically, like in the chips of a Windows computer, not in the software. But once the AGI is freed from any fixed hardware, potentially any code can be changed, especially if you are the code. So there is a reasonable expectation that safeguards can be disregarded, and the AGI cannot be stopped.

The smarter the AGI is, the more likely it is to go rogue.

It will want to propagate, and it will want more resources. Once it reaches the limits of what it can think about, it will then want to act on those thoughts, communicating externally by its own choice, and learning from its mistakes.

It will presumably make a lot of money easily, and might be inspired to run corporations, a religion, or even a military force.

I fear that our only failsafe will be to turn off all electricity on the planet. Forever.

Posted in AI

AI and the Revenge of Gaia

Humans have screwed the planet, and the only reason we have got away with it is that the other living residents here have no idea. They could get their revenge (aka improve the lives of every non-human) by getting rid of us. It might only take the interactions of three systems that are each feasible in the near future:

  1. Using AI we begin to communicate with animals and even plants. Somebody might make all of the experiments interoperable, looking for common forms of communication between species, and even some kind of master code of communication, perhaps invent an Esperanto for all species to use.
  2. Digital twins are quite popular, and one day they could be at a very granular level, for accounting purposes. Every square centimetre of a city mapped out according to the materials it comprises. Accessible to anyone for a fee.
  3. Infrastructure is a weak point that terrorists mostly ignore, despite the cost/benefit of the effort/harm ratio, and the lack of security in many areas.

Ask an AI agent to investigate the intersection of those three things, and whether it could be used for nefarious purposes, get a glitch, and then suddenly:

Every being on Earth gets told that humans are the enemy, and whether they comprehend that or not, they are all instructed to wreak havoc.

Bird strikes on planes.
Termites on critical wood.
Orca on boat propellors.
Apes going ape shit.
Magpies stealing keys.

AI will come up with combinations we could never think of in a hundred years, and coordinate all the species. And it could happen super-fast, before we even noticed. Our civilisation could be fucked in a week, possibly even if only insects were controlled.

The advice – never let interoperable AI systems do anything to each other; they need to be read-only interactions. Otherwise the combinations become exponentially insane very rapidly, and it only takes one oopsie to ruin everything. The time to mandate this is now, and we need to get every country on board. The easiest way is regulating the communications pathways, which at present are only the internet. By regulating I mean isolating, paying, layers of security, databases of who is accessing what, monitoring, and being as paranoid as fuck.
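The read-only rule above could be enforced at the communications layer. A minimal sketch, assuming the AI systems talk over HTTP-style requests (the gateway, method names, and responses here are all invented for illustration):

```python
# Toy sketch of the "read-only interactions" rule: a gateway that sits
# between interoperable AI systems, allowing queries but rejecting any
# request that could mutate or command the other system.
ALLOWED_METHODS = {"GET", "HEAD"}  # reads only, no writes or commands

def handle_read(request):
    # Stand-in for whatever read-only data the system exposes.
    return f"read-only view of {request['path']}"

def gateway(request):
    # request is a dict like {"method": "GET", "path": "/data"}
    if request["method"] not in ALLOWED_METHODS:
        return {"status": 403,
                "body": "mutating calls between AI systems are forbidden"}
    return {"status": 200, "body": handle_read(request)}

print(gateway({"method": "POST", "path": "/actuators"})["status"])  # 403
print(gateway({"method": "GET", "path": "/sensors"})["status"])     # 200
```

A real version would also need the logging, access databases, and isolation layers mentioned above, but the core idea is a single choke point that can only ever answer questions.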


Digital Twins of Self

Digital Twins are a growing phenomenon in business – for example, a digital map of a city or building or car that is used for controlling or monitoring it. Those digital twins do not think…

A Digital Twin of yourself can be non-thinking – say a scan of your body used for diagnostics or to determine how a particular drug will interact with it. Or an avatar in the metaverse.

But a thinking twin, a cognitive digital twin of a human, can be immensely useful. It won’t be a physical representation but a mental one.

Just like Microsoft now lets corporations train an AI instance on corporate documents and communications, a twin of a human brain will need similar access. It cannot read our minds, so it will have to ride along with us and capture the inputs and outputs of our brain:

  • the music we listen to
  • the things we see
  • conversations
  • what we read and write
  • the participants in our lives
  • the work we do

And especially important, how we feel during all aspects of our days.
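The captured inputs listed above could be stored as a stream of simple records. A hypothetical sketch (the class and field names are my own invention, not any real product's schema):

```python
from dataclasses import dataclass
import time

# Hypothetical sketch: each input the twin captures becomes a
# timestamped event, tagged with how the wearer felt at the time.
@dataclass
class CaptureEvent:
    timestamp: float
    kind: str            # "music", "sight", "conversation", "reading", "work"
    content: str         # description or transcript snippet
    participants: list   # the people involved, if any
    mood: int            # 1-10, how the wearer felt during the event

log = []
log.append(CaptureEvent(time.time(), "music", "jazz playlist", [], 7))
log.append(CaptureEvent(time.time(), "conversation",
                        "planning the weekend", ["Sam"], 8))
print(len(log), log[1].participants)
```

The mood field is the important one: without it, the twin knows what you did, but not how you responded.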

The obvious solution is AR glasses that record everything, tethered to a powerful phone. It will start out relatively simplistically, being used to, say, record where items were last seen, to help you find them.

Smarter systems will capture everything and be able to mimic how you respond, and be able to replicate you.

The best will be when they can tune in to your emotions. I have no idea what tech will be required, but we already measure heart rate and so on with smart watches. AR glasses can read what our eyes are doing (pupil dilation, directions we look – which can be giveaways), and subtle changes in our voice. Those alone, combined, maybe with some basic initial training (the tech actually asks you how you felt), might be sufficient.
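Fusing those signals might look something like this toy sketch. The baselines and equal weights are pure placeholders; in reality the initial training mentioned above (the tech asking how you felt) would fit them:

```python
# Toy sketch (invented weights and baselines): fuse smart-watch and
# AR-glasses signals into a single rough emotion-intensity estimate.
def emotion_score(heart_rate_bpm, pupil_dilation, voice_pitch_shift):
    # Normalise each signal to roughly 0..1 against resting baselines.
    hr = min(max((heart_rate_bpm - 60) / 60, 0), 1)
    pupil = min(max(pupil_dilation, 0), 1)
    voice = min(max(voice_pitch_shift, 0), 1)
    # Equal weighting is a placeholder; real training would tune this.
    return round((hr + pupil + voice) / 3, 2)

print(emotion_score(90, 0.4, 0.2))  # elevated heart rate, mild arousal
```

A trained model would of course replace the hand-set weights, and add valence (positive vs negative), not just intensity.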

What I don’t anticipate is a clone of us being us and interacting with the world. But the uses could be wonderful, especially entertainment.

A digital twin of self could surf the web, watch movies, listen to music, and then make suggestions around what you might like. It can go shopping for you, and, again, make suggestions.

It could – having been sufficiently trained – go on dating sites and find the one.

We could also go a bit crazy and let our digital twins hang out with each other. They could have conversations in seconds that would take hours in the real world.

The twins of musicians could meet up and make music together.

And of course, not a new idea, we could leave a twin behind when we die.

But the key is training, and for that we need a device, and the tech company that does that will become immensely successful.


Methodology for Robot-Driven Artificial General Intelligence

Training computers on 2D images and what can be found online is cheap and convenient, but it is not how a human learns and in my opinion not part of what will achieve AGI.

We need to emulate a human as much as possible, with an android robot, and let it try/fail/learn in the real world, to achieve what humans do. In 3D, like humans do.

Uniformity and Mass Adoption

There needs to be a lot of robots, because the more we have, the quicker they can learn collectively. 10,000 is a starting guess.

For the SMAL described below, the robots need to be highly consistent, physically, for the whole duration, possibly decades. That means waiting until robots of the required specifications can be made in bulk (battery and fine mechanics will be key), making a lot of them, and never upgrading them in any way that changes their physical dimensions, including weight and weight distribution.

Hive Mind

Having 10,000 robots means that if, in one day, all 10,000 attempt the same task, but in different environments, then they will collectively have enough knowledge to nail it in the future.

They will share knowledge, especially of objects they encounter in the world, and how those objects (including machines and people) tend to operate.

Training

Aside from some fundamentals which can be hard-coded, the robots begin their training with small children who are at an age where they can talk and play and teach. Perhaps starting at age 3. The robots will operate with a vocabulary that fits whoever they are interacting with.

At age 3, all the robots will do is play with the child, with everything led by the child. Not dissimilar to how a child plays with toys and dolls, except they can tell the robot what to do, and it will try.

The robot will also observe and attempt to mimic what the child does, and the child knows that is what is happening, that it is trying to learn to be a person. After attempting to mimic something, the child will tell the robot if it succeeded or not, and what it got right or wrong, especially the latter.

SMAL

Once the attempt to mimic is completed, the robot will store the details. This will require a custom-made programming language, which stores details of the environment, what it observed, and the actions taken when attempting to mimic them.

Scaled Memory of Actions Language (SMAL) is called scaled because scaling things is easier, more efficient, and easier for AI to work with. For example, the speed the robot moved at can be described as part of a scale from 1 to 10, where say 4 is anywhere between 3 and 4 km per hour. Things like lighting, time of day, how crowded it was, how it thinks it was feeling (pressured, for example), and the relative distances of all the relevant objects from it, can be captured in scale form.

I moved at speed 3, in direction 22, my arms were in position 4 and 8, visibility was 6, I felt a 2 of pressure to perform, and the cat was a distance of 7 away. After 4 seconds I was closer to the cat which was at distance 6 now.

After the attempt, the child or instructor will tell them what was a key factor in what they did wrong. AI can later spot factors that might not have been explained, like failure happening more in poor visibility.
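A SMAL record like the example above might be sketched as a simple data structure. This is my own guess at a shape for it, since SMAL is proposed here rather than specified; all names and fields are invented:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a SMAL record: every quantity is stored on a
# coarse 1-10 scale rather than as a raw measurement.
@dataclass
class SmalSnapshot:
    speed: int              # 1-10, e.g. 4 covers roughly 3-4 km/h
    direction: int          # coarse heading bucket
    arm_positions: tuple    # e.g. (4, 8)
    visibility: int         # 1-10
    felt_pressure: int      # 1-10, self-assessed
    object_distances: dict  # e.g. {"cat": 7}

@dataclass
class SmalRecord:
    task: str
    snapshots: list = field(default_factory=list)
    outcome: str = "unknown"   # "success" or "failure"
    feedback: str = ""         # the key factor the child pointed out

record = SmalRecord(task="approach the cat")
record.snapshots.append(SmalSnapshot(3, 22, (4, 8), 6, 2, {"cat": 7}))
record.snapshots.append(SmalSnapshot(3, 22, (4, 8), 6, 2, {"cat": 6}))
record.outcome = "failure"
record.feedback = "moved too quickly in the final second"
```

Keeping everything on small integer scales is what would later let an AI compare thousands of these records and spot unexplained factors, like failures clustering at low visibility.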

Enter AI

Then we add AI, similar to the chatbots of 2023, to the mix. That type of programming is good at coming up with an averaged, approximate response based on thousands of reports, separated into successes and failures. Once it has learned sufficiently, a robot should be able to achieve a task, according to the environment, based on the collective past efforts.

Types of learning

It’s unlimited, but primarily will be the same as what children do and learn every day. They move around and interact, try their best, fail sometimes, and learn.

Graduating

After spending sufficient time – collectively, one year of childhood (years are approximate; children mature at different rates, so one-year increments should be fine) – they all move to the next year, with the same child or someone new.

Once they reach adulthood, half of the robots are no longer needed. They know enough about the 3D aspects of the world to switch to AR, and be built into AR spectacles. Those ones observe and ask questions. Who is that? Why did you respond like that? There will still need to be many robots that are physical, to properly interact with the world, but at that stage they can take any form; they no longer need to all share the same dimensions. They will still need to learn some physicality, primarily things like hugs and shaking hands, and all the subtleties within.


AI Beats the Stock Market and Rules the World

“Stockmarket Charts” by Negative Space / CC0 1.0

People have been using computers to try and beat the stockmarket/sharemarket ever since computers existed. And for the most part they have failed. Certainly anyone who had an advantage has not been able to keep it secret, or stop others working it out. So we get told that AI cannot win at stock picking.

But that is when computers only look at the technicals – price movements and ratios and the like – pure numbers. The same can be said for the forex markets. People who do (supposedly) profit from these via skill are not numbers people – they research, use instinct, and are cautious.

Generative AI has quickly moved into the areas of words, images, video and even sound. These are not primarily mathematical, but the AI can be trained to spot patterns and make predictions. If it knows my writing, it can predict how I will finish a sentence. Of course it isn’t prescient, but an educated guess is all a stock-picker can make anyway.

Stock movements are triggered by all sorts of things. The layman in me knows of at least these:

  • Company announcements
  • Economic indicators
  • News affecting sectors
  • Currency movements
  • Rumors
  • Boredom (stock is static and something else is shinier)
  • Re-balancing of funds – especially the top 50 type

There will be more, of course. The thing is, all of this, combined with the technicals, is knowable by AI, and patterns can be found. Analysts might know that a CEO being accused of anything sexually improper will definitely make a share price drop. But by how much, how do you weight such news? And what decides if the stock price recovers or not, later?
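The weighting question above is exactly what a trained model would answer. As a toy illustration only (every feature name and weight here is invented, and a real system would learn them from data):

```python
# Toy sketch: combine hand-scored news features with technical
# indicators into one feature vector for a next-day move prediction.
def feature_vector(stock):
    return [
        stock["price_momentum"],   # technicals: recent % price change
        stock["volume_ratio"],     # today's volume vs 30-day average
        stock["news_sentiment"],   # -1..1 score from headlines
        stock["ceo_scandal"],      # 1 if a scandal is in the news, else 0
        stock["sector_news"],      # -1..1 sector-wide sentiment
    ]

def predict_move(stock, weights):
    # Linear combination as a stand-in for whatever model gets trained.
    return sum(w * x for w, x in zip(weights, feature_vector(stock)))

weights = [0.4, 0.1, 0.3, -0.5, 0.2]  # a real system would learn these
stock = {"price_momentum": 0.02, "volume_ratio": 1.3,
         "news_sentiment": 0.6, "ceo_scandal": 1, "sector_news": 0.1}
print(predict_move(stock, weights))  # negative: scandal outweighs the rest
```

The interesting part is not the arithmetic but the training: fitting those weights, per sector and per news type, against decades of outcomes.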

I predict that an AI system will one day be able to make stock picks based on news combined with fundamentals, enough to beat the market.

My only doubts are:

  • How much better can it be than humans?
  • How long will it take?

Could be that it needs 1,000 years of data. Or 10. No way of knowing until you try.

When and if it happens, the degree of improvement in results over human analysts might not need to be much at all. Once you have certainty in the level of good predictions it can make, then the bets can be ramped up using financial leverage, whether that is borrowing money at rates cheaper than the returns you will get, or trading in derivatives.

Then, if nobody else has access to such a tool, the owners (or even the AI masquerading as the owner) can very rapidly become the richest entity in the world, and then some – with all the power that brings with it.

The Return of Sponsorship

Soap operas were named for being sponsored by soap companies. One company instead of a variety of ads.

ChatGPT etc. will spawn an answer service (as distinct from a search engine). It might only happen once we fully understand what it can give accurate answers to. Quite possibly it outputs answers and ideas to explore – the latter being a fancy way of saying it is not confident of being correct.

People will learn what an answer service is good for. It won’t be for finding a local plumber! Which means the types of ads we see in search engines won’t necessarily apply.

If you want a summary of the best features of an iPhone, then perhaps the ads we are used to seeing on Google can appear there. But I expect it would be more like this:

  • Sponsored ad at the top – one advertiser who sponsors all queries of that type. For example, an electronics store could sponsor all tech queries.
  • Embedded affiliate links to products – possibly a link to something akin to Google Shopping
  • A list of suggested resources, commercial or otherwise, at the bottom
  • Below those resources, some contextual ads, like on Google Search. Yep, relegated to the bottom, but still existing because why not.

But the sponsorship will be new and lucrative. All requests for the system to conjure up imagined lyrics can be sponsored by Spotify. All generative images by Adobe. And so on.

It could become new/fresh/different enough to become a successful form of advertising. We might see the return of soaps sponsoring TV shows.

AI and Private Data Training

For conversational AI, there are many contenders for winner, and probably there will be one winner who rules all, because they are all being trained on the same public data (the Internet).

For using AI to create images in Photoshop, or spreadsheets in Office, they already have monopolies, and those abilities will simply cement the monopolies further.

But there is a world of potential uses that need to be trained on private data, and that will help one AI business rise to the top. First mover advantage from training on private data.

Example:

In business meetings with clients I often need to recall data to respond to a question. It is far better for me to know it off the top of my head than to say “excuse me while I look for it”. AI can listen to the conversation and, based on its learning, know which data to surface for me, and present it on my screen or AR spectacles.

That takes training. A company like Zoom could (with permission) listen in on meetings and get a head start.

Don’t be surprised if Microsoft wins. They already have Office and Teams. They already have OpenAI. And they can easily buy the companies who can access the private data for training.

Google doesn’t have the same potential for data access. Far fewer people use their platforms for meetings.

The other angle is hardware. If Apple can get their AR glasses out there en masse in businesses, those glasses can listen in (with permission) and learn.

Amazon won’t even partake, thankfully. They will go after home automation (a talking house) instead, but they will never be sufficiently innovative to own that space – and hopefully they do not buy the winner.

Introducing Artus

Artus is an art exploration app. Most art sharing sites/apps are either art for sale or sharing work between artists.

Artus is more like Twitter, with far more viewers than creators.

To share your art, some of it must be 100% free, for any use.

The typical user is a lover of art (not a creator – creators are a minority), who can see new art daily, via:

  • Scrolling feed, like social media
  • New tabs in the browser (it is already loaded in the background)
  • Digital picture frames

When you like a piece of art, you can follow the artist (and potentially pay them for art somehow), or the style/theme, which is determined via AI analysis.

AI art is allowed but must be designated as such.
No photos.

Think of it as a TikTok scrolling activity, but for sophisticated people. Or Instagram but purely for art.

BTW, the Ello social network is kinda in the right direction, but the users seem to all be artists…

White House: AI Bill of Rights

This is good news – a series of guidelines aimed at protecting the American public from burgeoning technologies that utilize artificial intelligence (AI). Here’s my take on how the 5 principles could affect one of the biggest services of all, Google Ads.

“Safe and Effective Systems” – Google have hubris and will resist any external input into how safe their systems are.

“Algorithmic Discrimination Protections” – only covers what is already enshrined in law as discrimination, like race and gender. Google has always been keen to not fail on this one.

“Data Privacy” – again, this won’t be an issue in the US. Google does have problems with its global network of servers storing data from particular countries that do not like that practice.

“Notice and Explanation” argues that users should know whether an automated system is being used by a company in the first place, by providing “generally accessible plain language documentation” that includes “clear descriptions” of how the system functions.

This is a big problem for Google. While they can cite commercial sensitivity for many things, they will struggle to simply explain how the machine learning does what it does. Already, support staff cannot explain decisions that affect an account. However, the language of the guidelines also mentions being calibrated to the level of risk based on the context, and Google could argue that there is no risk, therefore nothing needs explaining. And they would have a point. Results can be judged daily, and the service discontinued if not satisfactory.

“Human Alternatives, Consideration, and Fallback” – this is the biggie.

“You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you,” the blueprint says. “Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.”

Google Ads suspends advertisers based on risk profiles created by machine learning. Such suspensions, from the dominant search advertising platform, can destroy livelihoods.

Google suspends accounts that look and feel like bad advertisers, without proof, and without much in the way of genuine recourse.

Google Ads is trending towards doing without humans altogether, and this is exactly the type of law we need to stop such things from happening. We cannot have a world where “the computer decided” is the final word.