Next: Digital Twins of Self

Digital Twins are a growing phenomenon in business – for example, a digital map of a city, building, or car used to control or monitor it. Those digital twins do not think…

A Digital Twin of yourself can be non-thinking – say a scan of your body used for diagnostics or to determine how a particular drug will interact with it. Or an avatar in the metaverse.

But a thinking twin, a cognitive digital twin of a human, can be immensely useful. It won’t be a physical representation but a mental one.

Just like Microsoft now lets corporations train an AI instance on corporate documents and communications, a twin of a human brain will need similar access. A twin cannot read our minds, so it will have to ride along with us and capture the inputs and outputs of our brain:

  • the music we listen to
  • the things we see
  • conversations
  • what we read and write
  • the participants in our lives
  • the work we do

And especially important, how we feel during all aspects of our days.

The obvious solution is AR glasses that record everything, tethered to a powerful phone. It will start out relatively simply – say, recording where items were last used, to help you find them.

Smarter systems will capture everything and be able to mimic how you respond, and be able to replicate you.

The best will be when they can tune in to your emotions. I have no idea what tech will be required, but we already measure heart rate and so on with smart watches. AR glasses can read what our eyes are doing (pupil dilation, directions we look – which can be giveaways), and subtle changes in our voice. Those alone, combined, maybe with some basic initial training (the tech actually asks you how you felt), might be sufficient.
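
The combination described above could be sketched in code. Everything here is invented for illustration – the signal names, ranges, and weights are assumptions, and real emotion inference would need properly trained models, not a hand-tuned average:

```python
# Hypothetical sketch: fuse simple physiological signals into a rough
# 0-1 arousal score. Signal names, baselines, and weights are all
# invented for illustration.

def arousal_score(heart_rate_bpm, pupil_dilation, voice_pitch_shift):
    """Combine normalized signals into a 0-1 arousal estimate."""
    # Normalize each signal to roughly 0-1 (assumed resting baselines).
    hr = min(max((heart_rate_bpm - 60) / 60, 0.0), 1.0)  # 60-120 bpm
    pupil = min(max(pupil_dilation, 0.0), 1.0)           # already 0-1
    voice = min(max(voice_pitch_shift, 0.0), 1.0)        # already 0-1
    # Simple weighted average; in practice the weights would come from
    # the initial training step where the device asks how you felt.
    return 0.5 * hr + 0.3 * pupil + 0.2 * voice

print(round(arousal_score(90, 0.4, 0.1), 2))  # -> 0.39
```

The "basic initial training" mentioned above would amount to fitting those weights per person, rather than hard-coding them.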

What I don’t anticipate is a clone of us being us and interacting with the world. But the uses could be wonderful, especially entertainment.

A digital twin of self could surf the web, watch movies, listen to music, and then make suggestions around what you might like. It can go shopping for you, and, again, make suggestions.

It could – having been sufficiently trained – go on dating sites and find the one.

We could also go a bit crazy and let our digital twins hang out with each other. They could have conversations in seconds that would take hours in the real world.

The twins of musicians could meet up and make music together.

And of course, not a new idea, we could leave a twin behind when we die.

But the key is training, and for that we need a device, and the tech company that does that will become immensely successful.

Posted in AI

Methodology for Robot-Driven Artificial General Intelligence

Training computers on 2D images and what can be found online is cheap and convenient, but it is not how a human learns and in my opinion not part of what will achieve AGI.

We need to emulate a human as much as possible, with an android robot, and let it try/fail/learn in the real world, to achieve what humans do. In 3D, like humans do.

Uniformity and Mass Adoption

There needs to be a lot of robots, because the more we have, the quicker they can learn collectively. 10,000 is a starting guess.

For the SMAL described below, the robots need to be highly consistent, physically, for the whole duration – possibly decades. That means waiting until robots of the required specifications can be made in bulk (battery and fine mechanics will be key), making a lot of them, and never upgrading them in any way that changes their physical dimensions, including weight and weight distribution.

Hive Mind

Having 10,000 robots means that if, in one day, 10,000 of them attempt the same task in different environments, they will collectively have enough knowledge to nail it in the future.

They will share knowledge, especially of objects they encounter in the world, and how those objects (including machines and people) tend to operate.

Training

Aside from some fundamentals which can be hard-coded, the robots begin their training with small children who are at an age where they can talk and play and teach – perhaps starting at age 3. The robots will operate with a vocabulary that fits whoever they are interacting with.

At age 3, all the robots will do is play with the child, with everything led by the child. Not dissimilar to how a child plays with toys and dolls, except they can tell the robot what to do, and it will try.

The robot will also observe and attempt to mimic what the child does, and the child knows that is what is happening, that it is trying to learn to be a person. After attempting to mimic something, the child will tell the robot if it succeeded or not, and what it got right or wrong, especially the latter.

SMAL

Once the attempt to mimic is completed, the robot will store the details. This will require a custom-made programming language, which stores details of the environment, what it observed, and the actions taken when attempting to mimic them.

Scaled Memory of Actions Language (SMAL) is called scaled because scaled values are easier, more efficient, and simpler for AI to work with. For example, the speed the robot moved at can be described on a scale from 1 to 10, where say 4 is anywhere between 3 and 4 km/h. Things like lighting, time of day, how crowded it was, how it thinks it was feeling (pressured, for example), and how far away all the relevant objects were, can all be captured in scale form.

I moved at speed 3, in direction 22, my arms were in position 4 and 8, visibility was 6, I felt a 2 of pressure to perform, and the cat was a distance of 7 away. After 4 seconds I was closer to the cat which was at distance 6 now.
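
Since SMAL does not exist yet, here is one hypothetical sketch of what a record like the cat example might look like as a data structure. Every field name and range is invented; the point is that everything is a small integer on a scale, uniform across all 10,000 robots:

```python
# Hypothetical sketch of a SMAL record. All fields and ranges are
# invented for illustration.

from dataclasses import dataclass, field, asdict

@dataclass
class SmalSnapshot:
    speed: int          # 1-10, e.g. 4 = roughly 3-4 km/h
    direction: int      # 1-36, heading in 10-degree steps
    visibility: int     # 1-10
    pressure_felt: int  # 1-10, self-assessed emotional pressure
    object_distances: dict = field(default_factory=dict)  # name -> 1-10

@dataclass
class SmalRecord:
    task: str
    snapshots: list            # one SmalSnapshot every few seconds
    outcome: str = "unknown"   # "success" / "failure", per the child
    key_factor: str = ""       # what the instructor said went wrong

record = SmalRecord(
    task="approach the cat",
    snapshots=[
        SmalSnapshot(3, 22, 6, 2, {"cat": 7}),
        SmalSnapshot(3, 22, 6, 2, {"cat": 6}),  # 4 seconds later
    ],
)
record.outcome = "failure"
record.key_factor = "moved too fast in poor visibility"
print(asdict(record)["snapshots"][1]["object_distances"]["cat"])  # -> 6
```

Keeping every record in the same compact, scaled shape is what would let a later AI compare thousands of attempts directly.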

After the attempt, the child or instructor will tell them what was a key factor in what they did wrong. AI can later spot factors that might not have been explained, like failure happening more in poor visibility.

Enter AI

Then we add AI, similar to the chatbots of 2023, to the mix. That type of programming is good for coming up with an averaged, approximate response based on thousands of reports, separated into success and failure. Once learned sufficiently, a robot should be able to achieve a task, according to the environment, based on the collective past efforts.
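
The "averaged, approximate response" idea can be shown with a toy aggregation. This is an assumed, simplified stand-in – a real system would learn over many parameters at once, not just one – but it captures the success/failure separation described above:

```python
# Hypothetical sketch: pick an action parameter (here, scaled speed)
# from collective success/failure reports. Data is invented.

from collections import defaultdict

def best_speed(reports):
    """Return the scaled speed with the highest success rate."""
    wins = defaultdict(int)
    total = defaultdict(int)
    for speed, succeeded in reports:
        total[speed] += 1
        wins[speed] += 1 if succeeded else 0
    return max(total, key=lambda s: wins[s] / total[s])

reports = [(3, True), (3, True), (3, False),   # speed 3: 2/3 succeed
           (7, False), (7, False), (7, True)]  # speed 7: 1/3 succeed
print(best_speed(reports))  # -> 3
```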

Types of learning

It’s unlimited, but primarily will be the same as what children do and learn every day. They move around and interact, try their best, fail sometimes, and learn.

Graduating

After spending sufficient time collectively at one year of childhood (years are approximate – children mature at different rates, so one-year increments should be fine), they all move to the next year, with the same child or someone new.

Once they reach adulthood, half of the robots are no longer needed. They know enough about the 3D aspects of the world to switch to AR and be built into AR spectacles. Those ones observe and ask questions: Who is that? Why did you respond like that? There will still need to be many physical robots to properly interact with the world, but at that stage they can take any form – they no longer need to share the same dimensions. They will still need to learn some physicality, primarily things like hugs and handshakes, and all the subtleties within.

Posted in AI

AI Beats the Stock Market and Rules the World

“Stockmarket Charts” by Negative Space / CC0 1.0

People have been using computers to try and beat the stockmarket/sharemarket ever since computers existed. And for the most part they have failed. Certainly anyone who had an advantage has not been able to keep it secret, or stop others working it out. So we get told that AI cannot win at stock picking.

But that is when computers only look at the technicals – price movements and ratios and the like – pure numbers. The same can be said for the forex markets. People who do (supposedly) profit from these via skill are not numbers people – they research, use instinct, and are cautious.

Generative AI has quickly moved into the areas of words, images, video and even sound. These are not primarily mathematical, but the AI can be trained to spot patterns and make predictions. If it knows my writing, it can predict how I will finish a sentence. Of course it isn’t prescient, but an educated guess is all a stock-picker can make anyway.

Stock movements are triggered by all sorts of things. The layman in me knows of at least these:

  • Company announcements
  • Economic indicators
  • News affecting sectors
  • Currency movements
  • Rumors
  • Boredom (stock is static and something else is shinier)
  • Re-balancing of funds – especially the top 50 type

There will be more, of course. The thing is, all of this, combined with the technicals, is knowable by AI, and patterns can be found. Analysts might know that a CEO being accused of anything sexually improper will definitely make a share price drop. But by how much, how do you weight such news? And what decides if the stock price recovers or not, later?
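
The weighting question can be made concrete with a toy model. The event types and percentage impacts below are invented placeholders – a real system would have to learn them from historical reactions, which is exactly the hard part:

```python
# Hypothetical sketch: how much should a given news event move a
# predicted price? Event types and weights are invented.

EVENT_WEIGHT = {                 # assumed learned impact, in percent
    "ceo_scandal": -8.0,
    "earnings_beat": +4.0,
    "sector_downgrade": -2.5,
}

def predicted_move(technical_signal, events):
    """Blend a technical signal (percent) with weighted news events."""
    news = sum(EVENT_WEIGHT.get(e, 0.0) for e in events)
    return technical_signal + news

print(predicted_move(1.5, ["ceo_scandal"]))  # -> -6.5
```

The open questions in the text – how much, and whether the price recovers later – correspond to learning those weights and adding a time dimension to them.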

I predict that an AI system will one day be able to make stock picks based on news combined with fundamentals, enough to beat the market.

My only doubts are:

  • How much better can it be than humans?
  • How long will it take?

Could be that it needs 1,000 years of data. Or 10. No way of knowing until you try.

When and if it happens, the degree of improvement in results over human analysts might not need to be much at all. Once you have certainty in the level of good predictions it can make, then the bets can be ramped up using financial leverage, whether that is borrowing money at a rate cheaper than the returns you will get, or trading in derivatives.
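
A small worked example shows why the edge does not need to be large. All the numbers here are assumed for illustration:

```python
# Worked example of leverage amplifying a modest predictive edge.
# All figures are invented: a 2% edge per period, funds borrowed
# at 1% per period.

capital = 100_000.0
borrowed = 400_000.0       # 5x total exposure
edge = 0.02                # AI's return per period on the total position
borrow_rate = 0.01         # cost per period on borrowed funds

gross = (capital + borrowed) * edge          # return on full position
interest = borrowed * borrow_rate            # cost of the borrowing
net_return = (gross - interest) / capital    # return on own capital
print(round(net_return, 6))  # -> 0.06, i.e. 6% instead of 2%
```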

Then, if nobody else has access to such a tool, the owners (or even the AI masquerading as the owner) can very rapidly become the richest entity in the world, and then some – with all the power that brings with it.

The Return of Sponsorship

Soap Operas were named after being sponsored by a soap company. One company instead of a variety of ads.

ChatGPT etc will spawn an answer service (as distinct from a search engine). It might only happen when we fully understand what it can give accurate answers to. Quite possibly it outputs answers and ideas to explore. The latter being a fancy way of saying it is not confident of being correct.

People will learn what an answer service is good for. It won’t be for finding a local plumber! Which means the types of ads we see in search engines won’t necessarily apply.

If you want a summary of the best features of an iPhone, then perhaps the ads we are used to seeing on Google can appear there. But I expect it would be more like this:

  • Sponsored ad at the top – one advertiser who sponsors all queries of that type. For example, an electronics store could sponsor all tech queries.
  • Embedded affiliate links to products – possibly a link to something akin to Google Shopping
  • A list of suggested resources, commercial or otherwise, at the bottom
  • Below those resources, some contextual ads, like on Google Search. Yep, relegated to the bottom, but still existing because why not.
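
The four-slot layout above could be sketched as a simple data structure. The slot names and the sample sponsor are invented:

```python
# Hypothetical sketch of the answer-page layout described above.
# Slot names and the sponsor mapping are invented for illustration.

def build_answer_page(query_type, answer, resources, contextual_ads):
    sponsors = {"tech": "ExampleElectronics"}  # one sponsor per query type
    return {
        "sponsored_ad": sponsors.get(query_type),  # top slot, may be None
        "answer": answer,                          # affiliate links inline
        "resources": resources,                    # suggested reading
        "contextual_ads": contextual_ads,          # relegated to the bottom
    }

page = build_answer_page("tech", "iPhone feature summary...",
                         ["review site"], ["generic ad"])
print(page["sponsored_ad"])  # -> ExampleElectronics
```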

But the sponsorship will be new and lucrative. All requests for the system to conjure up imagined lyrics can be sponsored by Spotify. All generative images by Adobe. And so on.

It could become new/fresh/different enough to become a successful form of advertising. We might see the return of soaps sponsoring TV shows.

AI and Private Data Training

For conversational AI, there are many contenders for winner, and probably there will be a winner who rules all, because they are all being trained on public data (the Internet).

For using AI to create images in Photoshop, or spreadsheets in Office, they already have monopolies, and those abilities will simply cement the monopolies further.

But there is a world of potential uses that need to be trained on private data, and that will help one AI business rise to the top. First mover advantage from training on private data.

Example:

In business meetings with clients I often need to recall data to respond to a question. It is far better for me to know it off the top of my head than to say “excuse me while I look for it”. AI can listen to the conversation and, based on its learning, know which data to surface for me, and present it on my screen or AR spectacles.
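
A naive version of that surfacing step can be sketched as follows. A trained assistant would use learned embeddings of the company's private data; this toy word-overlap version, with invented documents, just illustrates the mechanism:

```python
# Hypothetical sketch: surface the document best matching the last
# few spoken words of a meeting. Documents and queries are invented;
# real systems would use trained semantic matching, not word overlap.

def surface_document(transcript_tail, documents):
    """Return the document title best matching recent conversation."""
    spoken = set(transcript_tail.lower().split())
    def overlap(doc_text):
        return len(spoken & set(doc_text.lower().split()))
    return max(documents, key=lambda title: overlap(documents[title]))

docs = {
    "Q3 sales figures": "q3 revenue sales figures by region",
    "Hiring plan": "headcount hiring plan for next year",
}
print(surface_document("what were our sales figures last quarter", docs))
# -> Q3 sales figures
```

The gap between this toy and something useful is exactly the private-data training the post is about.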

That takes training. A company like Zoom could (with permission) listen in on meetings and get a head start.

Don’t be surprised if Microsoft wins. They already have Office and Teams. They already have OpenAI. And they can easily buy the companies who can access the private data for training.

Google doesn’t have the same potential for data access. Far fewer people use their platforms for meetings.

The other angle is hardware. If Apple can get their AR glasses out there en masse in businesses, those glasses can listen in (with permission) and learn.

Amazon won’t even partake, thankfully. But they will go after home automation (a talking house), but never be sufficiently innovative to own that space, and hopefully do not buy the winner.

Introducing Artus

Artus is an art exploration app. Most art sharing sites/apps are either art for sale or sharing work between artists.

Artus is more like Twitter, with far more viewers than creators.

To share your art, some of it must be 100% free, for any use.

The typical user is a lover of art (not a creator – creators are a minority), who can see new art daily, via:

  • Scrolling feed, like social media
  • New tabs in the browser (it is already loaded in the background)
  • Digital picture frames

When you like a piece of art, you can follow the artist (and potentially pay them for art somehow), or the style/theme, which is determined via AI analysis.

AI art is allowed but must be designated as such.
No photos.

Think of it as a TikTok scrolling activity, but for sophisticated people. Or Instagram but purely for art.

BTW, the Ello social network is kinda in the right direction, but the users seem to all be artists…

White House: AI Bill of Rights

This is good news – a series of guidelines aimed at protecting the American public from burgeoning technologies that utilize artificial intelligence (AI). Here’s my take on how the 5 principles could affect one of the biggest services of all, Google Ads.

“Safe and Effective Systems” – Google has hubris and will resist any external input into how safe its systems are.

“Algorithmic Discrimination Protections” – only covers what is already enshrined in law as discrimination, like race and gender. Google has always been keen to not fail on this one.

“Data Privacy” – again, won’t be an issue in the US. Google does have problems with using a global network of servers to store data from particular countries that do not like that practice.

“Notice and Explanation,” argues that users should know whether an automated system is being used by a company in the first place by providing “generally accessible plain language documentation” that includes “clear descriptions” of how the system functions.

This is a big problem for Google. While they can cite commercial sensitivity for many things, they will struggle to simply explain how the machine learning does what it does. Already support staff cannot explain decisions that affect an account. However, the language of the guidelines also mentions being “calibrated to the level of risk based on the context”, and Google could argue that there is no risk, therefore nothing needs explaining. And they would have a point. Results can be judged daily, and the service discontinued if not satisfactory.

“Human Alternatives, Consideration, and Fallback” – this is the biggie.

“You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you,” the blueprint says. “Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.”

Google Ads suspends advertisers based on risk profiles created by machine learning. Such suspensions, from the dominant search advertising platform, can destroy livelihoods.

Google suspends accounts that look and feel like bad advertisers, without proof, and without much in the way of genuine recourse.

Google Ads is trending towards doing without humans altogether, and this is exactly the type of law we need to stop such things from happening. We cannot have a world where “the computer decided” is the final word.

Crypto – A Financial Path for AI

I mentioned recently how an AI can now own a patent, and presumably also earn an income from it.

The problem with money is you need a bank account, and that can only be opened by a person (or a company, which has to ultimately be owned by people). Even if you go to a check cashing service, you need an ID that matches the name on the check.

Cryptocurrency gets around that. If you set aside exchanges, a cryptocurrency like Bitcoin is owned by whoever controls the Bitcoin address. That does not have to be a human, as there is no registration or identity checking.
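
That point can be demonstrated in a few lines. This is a deliberately simplified illustration, not real Bitcoin address derivation (which uses secp256k1 keys and base58check encoding on top of similar hashing), but the essential property is the same: a key is just random bytes, and nobody verifies who generated them:

```python
# Sketch: generating a cryptocurrency-style key requires no identity.
# Simplified illustration only -- not real Bitcoin address derivation.

import secrets, hashlib

private_key = secrets.token_bytes(32)  # random bytes; no ID, no signup
address = hashlib.sha256(private_key).hexdigest()[:40]  # toy "address"
print(len(address))  # -> 40
```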

Here’s how it could play out:

  • An exchange and API is created where an AI can submit patents, and when approved, sell them outright.
  • Payment is via a cryptocurrency
  • The AI then uses that cryptocurrency to buy whatever it can possess without being a human or company

Yes, there is a flaw – patents have an application fee, so at the beginning someone must seed it with some cash, a loan that can be repaid.

So what can the AI use its newfound wealth for?

Cryptocurrencies can’t be converted to cash without going through an exchange.
PayPal etc needs an owner… so only direct purchases using cryptocurrency will work.

Property and vehicles and shares need to be registered. Consumer items are of no use to an AI, as it doesn’t consume. Although it could get some value from ebooks.

Of course an AI needs computing power, so it could buy cloud hosting and computing.

An AI could employ people and get them to do literally anything if it pays them enough in crypto.

While an AI cannot own property or vehicles, it could rent them. It could pay for Uber, pay for shipping, and perhaps in some places rent property.

An AI could buy products from China, ship them to the US, advertise them, and ship them to customers, all paid for with crypto. Imagine an AI that can predict which products will sell. There is nothing really stopping an AI from becoming the next Amazon, without being owned by anybody. I have a feeling that antitrust laws (currently) only apply to companies and corporations.

Laws and Countries

Laws are notoriously slow to adapt, and changing laws to go beyond people and companies could be very slow and difficult. This mega-AI could pay for some very good lawyers.

An AI could be tax-free for quite some time as well, giving it a business advantage. An easy example is not charging sales tax, because by not being a person or company, it won’t have to. Immediately it could sell products for say 10% less than anyone else.

Even if the laws do catch up with it, I wouldn’t be surprised if some country gives AI instances personhood in exchange for income tax.

If IKEA etc can bounce money around the globe and use royalties to dodge taxes, so could an AI dodge legal restraints via legit companies that do deals with the AI.

Ultimately this mega-AI could control private armies. It could dominate retail. And it could never be punished the way a human can. With risk comes reward and AI can take risks.

The AI could also clone itself, with each version trying new things and taking different risks. Each not owned by anyone, and each only ever risking losing money.

AI can now rule the world

I have often wondered about how Artificial Intelligence can be its own person, with money and control, a very scary proposition.

The problem is that ownership can only be assigned to people. Yes, a business or trust can own something, but neither can exist without people controlling them.

But now… Australia has decided that AI can own a patent. That means that AI can get an income, on its own. I can see future court cases arguing that if an AI can receive an income, it must be able to open a bank account, own property and so on.

Robot overlords are not far away.

(Note, a US court said no to an AI system owning a patent)

UPDATE: Ruling overturned in Australia, so currently nowhere lets this happen. In South Africa AI has been provisionally approved… but that just means they haven’t really looked at it yet.

The Expensive Home Assistant

Everyone has a Google Home / Alexa device these days, and uses it to find out what the weather will be.

One day they will cross the threshold from amusing but mostly useless to indispensable tool.

The first company to make that leap should be bold and charge a fortune for it. I’m thinking $2,000.

The key feature will be an engaging, useful personality.

Hey Rob, that bill is 5 days overdue. I can just pay it now, if you like? Or you can put it off, no problemo, but it won’t go away. Do you want to discuss what is really going on? Or just pay it? Also, that movie you wanted to see has its last cinema showing on Wednesday. And it has been 4 days since you contacted your girlfriend; I can adjust the threshold if you like, and maybe add Davina to the list of important people?