Without a doubt, Artificial Intelligence will have a strong presence in our lives for the next few decades (and after that we will be inferior, and their intelligence won’t seem artificial any more…)
I’m fairly sure that mass-produced AI, whether it is in a device or a physical robot, will come in different flavors, just like cars come in different colors with different trims. The flavors will be very generic versions of types of knowledge, experience and personality that market research determines will sell best.
From the base configuration, all AI will learn from their interactions with humans, what they find online, and to a lesser extent the real world around them.
Siri already learns…
So here’s the idea. Whatever the AI first encounters out of the box will have a disproportionate influence on how its personality develops.
So we seed the AI by getting it to read a book (I suspect it will have pictures, as pictures and imagery of the real world will be very important for AI learning). We buy a book created for the sole purpose of seeding AI. And there will be thousands of such books, written by people who are perhaps unpublished novelists today.
Just like screenplays have a format and conventions, so will AI seed books.
It won’t be so different to the storylines that are created for entertainment in Westworld.
This idea is very different to an AI which improves itself by recursively rewriting its own source code without human intervention.
This one is for the behemoths that already have us logged in during most of our web journeys: Google, Facebook, Microsoft/MSN/Live/Hotmail…
When making a purchase or signing up for a subscription, you get a page pre-filled with all of your pertinent info:
- credit card
- mother’s maiden name
and so on…
The sign up page is controlled by Google, Facebook, whoever.
Each field has a statement alongside it specifying how the data would be used, and this forms part of a legal agreement. Not unlike when you get an Apple app and you are told which aspects of your iPhone/iPad would be open to it. But take it a step further and state precisely what the data could and would be used for.
Just using tick-boxes and extended info from the merchant, you can quickly decide which data to let them use, and which to withhold.
The tick-boxes can be pre-filled, but if they are, the merchant must allow ratings from every user regarding how they feel about those preselections. Ask for too much unnecessarily and feel the backlash.
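A minimal sketch of how such per-field consent could be modelled, assuming hypothetical field names and usage statements (nothing here reflects a real Google/Facebook schema):

```python
# Hypothetical data model for a per-field consent form.
# All field names and usage statements are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ConsentField:
    name: str            # e.g. "credit_card"
    usage: str           # plain-language statement of how the data is used
    preselected: bool    # the merchant's suggested tick state
    granted: bool = False

@dataclass
class ConsentForm:
    merchant: str
    fields: list = field(default_factory=list)

    def grant(self, name: str) -> None:
        # The user ticks a box, agreeing to the stated usage.
        for f in self.fields:
            if f.name == name:
                f.granted = True

    def shared_data(self) -> list:
        # Only fields the user actually ticked are released to the merchant.
        return [f.name for f in self.fields if f.granted]

form = ConsentForm("Example Merchant", [
    ConsentField("credit_card", "Charged once for this purchase", preselected=True),
    ConsentField("mothers_maiden_name", "Used for account recovery", preselected=False),
])
form.grant("credit_card")
print(form.shared_data())  # ['credit_card']
```

The point of the structure is that the usage statement travels with the field, so the tick-box and the legal agreement are the same object.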
PayPal could do this. The key problem – credit card details – would already be out of the mix. That’s a bold sales pitch: we already protect your credit card details, let us protect everything else.
I don’t think websites will be going away any time soon, but I do think that within 2 years a new way of navigating them will emerge: a Siri for site navigation.
After enabling it from a top-level menu or a prominent button, when visiting a site you simply talk to your screen to get to the page you want more quickly.
“Search for grey singlets size 11 with a pocket” and the search results appear.
“How long will a singlet take to be delivered?” “What is your postcode?” “90210” “5 working days, or 1 day with an additional fee.”
“Take me to your latest Instagram pics” and it does.
The reasons for this prediction are:
- There’s a definite need, especially on mobile. One voice command can cover a sequence of taps and page loads
- If it starts as a WordPress plugin, that covers a lot of the web
- The microdata requirements will have other uses, like Google Shopping
- Website navigation is quite limited in scope, so it is very achievable
- One of Apple / Microsoft / Amazon / Google will quite likely offer 3rd parties access to their AI / chatbot abilities
- At the very least there is a market from government websites, which will want to cater for the visually impaired
- The same system can be used for when people haven’t actually visited your site, but make a general query to an all-rounder chatbot
Combined with a universal login like Facebook, the navigation bot can already know your delivery address and so on.
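As a toy illustration of the navigation idea, a keyword-to-route lookup might look like this – the routes, keywords and matching logic are all hypothetical placeholders for real speech recognition and intent parsing:

```python
# Toy sketch of mapping spoken commands to site routes by keyword matching.
# A real navigation bot would use proper NLU plus the site's microdata;
# these routes and keywords are made up for illustration.

SITE_ROUTES = {
    ("search",): "/search",
    ("deliver", "shipping"): "/help/delivery",
    ("instagram",): "/social/instagram",
}

def route_for(utterance: str) -> str:
    """Return the first route whose keywords appear in the spoken command."""
    text = utterance.lower()
    for keywords, route in SITE_ROUTES.items():
        if any(k in text for k in keywords):
            return route
    return "/"  # nothing matched: fall back to the home page

print(route_for("Take me to your latest Instagram pics"))  # /social/instagram
```

One voice command collapsing a sequence of taps is really just this lookup plus the form-filling that a universal login already enables.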
It is prediction time again. Consider the following:
- Improv comedy has many fans
- Flash mobs still occur
- Augmented reality glasses will be commonplace
At work, at a bar, at a sporting event, two members of #ARImprov are in the same space. Their glasses identify the other, and a role is assigned. Nobody else knows, except the participants and their private AR views.
They are then assigned random improv roles. They could be former lovers, long-lost cousins, or undercover agents – could be anything. And they play their unlikely roles totally straight-faced, for as long as they like.
I once lived in a backpacker hostel with lots of long-term residents. I was chatting with a newcomer when a friend asked me “who’s your friend?” and I made an impromptu lie – she is my sister. We were both from NZ, but that was the only commonality. It was accepted by everyone, for months, and it was a fun little inside joke. Until we drunkenly pashed one night and the truth had to come out – and many people refused to accept that we weren’t related.
This is a powerful and subversive concept, and therefore I figure it will certainly become a thing in the near future. And if it doesn’t, I’ll start it.
The product is called Here, and you can learn about it at Kickstarter…
$249 ear buds, so not too different to what people are paying just to listen to music with.
Using the computational power of your smart phone, you can:
- turn up/down the volume of the world around you
- equalize the music at a live concert to suit you – such as turning down the bass
- turn down specific frequencies, like the rumble of a jet plane or train – or a crying baby?
More at Wired.
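The “turn down specific frequencies” idea can be sketched offline with a simple FFT mask – a rough illustration only, since real hearables would use low-latency DSP filters rather than a block FFT:

```python
# Rough sketch of selective frequency attenuation using an FFT mask.
# Real ear buds do this with streaming DSP filters; this just illustrates
# the idea of "turning down" one band (the rumble) while leaving the rest.

import numpy as np

def attenuate_band(signal, sample_rate, low_hz, high_hz, gain=0.1):
    """Scale all frequency components between low_hz and high_hz by `gain`."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[mask] *= gain
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a 100 Hz "rumble" mixed with a 1000 Hz tone.
rate = 8000
t = np.arange(rate) / rate
audio = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)
quieter = attenuate_band(audio, rate, 50, 200)  # turn down the rumble
```

The crying-baby case is obviously harder – that is a broadband sound, not a single hum – but the jet-engine rumble really is concentrated in a low band like this.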
The first of two factors that I feel will be part of the payment system that topples PayPal’s crown is a no-brainer: make micro-payments a popular and real thing.
The other is something far less innovative.
When I look at purchases on my credit card statement, I see the name of a merchant and a dollar amount. Often the name is not one I recognise, and it can be difficult working out exactly where I made the transaction. And even if I know who the merchant is, there is nothing to tell me what I purchased.
PayPal uses the same system. Yet in this data-rich electronic age, there’s no reason why I can’t see the full details with one click:
- the merchant’s trading name
- their full contact details
- list of the items I purchased
- option to cancel (if it is a subscription)
Once we have all that, the data could be used for analysis – a home finance system.
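As a sketch, the richer statement entry described above could be modelled like this – the field names are illustrative, not any real PayPal or bank schema:

```python
# Hypothetical model of a "full detail" statement entry.
# Field names are made up for illustration, not a real payment API.

from dataclasses import dataclass

@dataclass
class LineItem:
    description: str
    amount: float

@dataclass
class Transaction:
    trading_name: str      # the name you actually recognise
    contact_details: str
    items: list
    is_subscription: bool = False
    cancelled: bool = False

    @property
    def total(self) -> float:
        return sum(i.amount for i in self.items)

    def cancel(self) -> bool:
        # The one-click cancel only applies to subscriptions.
        if self.is_subscription:
            self.cancelled = True
        return self.cancelled

tx = Transaction("Acme Gym", "support@acmegym.example",
                 [LineItem("Monthly membership", 49.00)], is_subscription=True)
print(tx.total)  # 49.0
tx.cancel()
```

Once every charge carries its line items like this, the home-finance analysis falls out almost for free – categorising is just grouping over `items`.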
Push hard on my skin, and you’ll find that the source is two lumps of glass, metal, and plastic embedded in my right hand: a years-old magnet in the ring finger and a newer NFC chip in my thumb webbing. Before getting the magnet, I read paeans to the coming cyborg revolution. After putting it in, I had people tell me my hand would fall off. Since getting the NFC chip in June, I’ve read that I’m carrying the Mark of the Beast and found instructions for how to disable it with a taser. Well, I come from the future, and I’m here to tell you: transcending the limits of the flesh can be downright dull.
Source: The Verge
The woman writing above can sense local magnetic activity – whether from microwave ovens, hard drives, regular metal or other magnets. It has added another dimension to her senses, and she likes it – although her friend who had the same magnet inserted is now over it.
And her NFC chip? It is pretty much useless at present – there aren’t enough uses for it. And the situations where she would love to use it, like replacing her employer’s security keycard, aren’t an option because it is non-standard. Still, it is interesting that she purchased the chip and installation kit online and inserted it herself.
You can guarantee one day there will be an Apple of implanted chips that will provide you with secure identification that can’t easily be stolen from you. Just not yet (except for that Spanish nightclub a decade ago!)
I dreamt I was at a tech conference developing this service, so I thought I should share it.
Although Facebook and Twitter have become sharing services, they became popular as a means of providing timely updates about your life.
The next step forward would be real-time video updates – literally push a button and people you are connected to online can watch what is happening.
For example you might be shopping for clothes, and looking for the opinions of friends regarding which to buy.
Now there are some downsides to this:
- privacy of others in the vicinity
- hard to film yourself
So the solution is to use sensor technology combined with spectacle cameras, avatars and 3D modelling.
This is what you do:
1. Activate your recording via voice or dedicated button, via your wearable smart device
2. Indicate the type of share it is – for example shopping
This is what the tech does:
1. Locates where you are in the world
2. Uses sensors to build a 3D description of the local environment
3. Generates a cartoon-ish video representation of where you are
4. Shows you in 3D avatar form
5. Shows anything specific (like an item of clothing you are holding up) as a real image within the cartoon-ish video
Your friends will see your avatar in a semi-realistic 3D world, hear what is going on, and see for real any objects specific to the type of sharing.
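The five steps above can be sketched as a stubbed pipeline – every function and field name here is hypothetical, with the heavy lifting (sensing, 3D modelling, rendering) stubbed out:

```python
# Stubbed sketch of the five steps above. All names are hypothetical;
# the sensing, 3D modelling and rendering are placeholders.

def locate(device):
    # 1. Locate where you are in the world
    return device["gps"]

def scan_environment(sensors):
    # 2. Use sensors to build a rough 3D description of the local environment
    return {"mesh": sensors["depth"]}

def stylise(environment):
    # 3. Generate a cartoon-ish representation of that environment
    return {"style": "cartoon", "mesh": environment["mesh"]}

def render_share(device, sensors, share_type, real_objects):
    # 4 & 5. Add your avatar, plus real images of anything share-specific
    return {
        "type": share_type,
        "location": locate(device),
        "scene": stylise(scan_environment(sensors)),
        "avatar": device["user_avatar"],
        "real_objects": real_objects,
    }

share = render_share(
    {"gps": (-33.87, 151.21), "user_avatar": "avatar_v2"},
    {"depth": "point_cloud_stub"},
    "shopping",
    ["photo_of_the_shirt_you_are_holding"],
)
print(share["scene"]["style"])  # cartoon
```

The share type matters because it decides which objects get rendered for real (the shirt) and which stay cartoon (everyone else in the shop).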
This is a continuation of my idea that mostly-realistic avatars will have a future online.
I’m really going out on a limb with this prediction – but to achieve some things with human bodies we will need to adjust.
Right now Google Maps has cars that drive down every road and capture street images. They achieve this by using a special camera mounted on a small tripod above the vehicle.
They also have a “self-driving” car which has a 64-beam laser on the roof.
The car has been driven for thousands of miles without an accident, but the key to this is 360-degree views.
When humans want to combine computers with their real-world activities, they might find that products like Google Glass are too restricted – that you need to look at something for the system to know it is there.
I suggest that the need will arise for a 360-degree sensor system, using cameras and/or lasers, microphones and so on. The only easy place for it is on top of your head (or perhaps a necklace of sorts).
Eventually ways will be found to make it look cool, perhaps like elongated skulls wearing beanies. There won’t be much use for it when you are at home, so it will be removable.
“Who would have thought that to reprogram adult cells to a pluripotent state just required a small amount of acid for less than half an hour – it’s an incredible discovery.”
“It’s mother nature’s repair process.”
“The implication is that you can very easily, from a drop of blood and simple techniques, create a perfect identical twin”
By stressing regular adult human cells (30 minutes in an acid bath), they literally curl up into a foetal position – they revert to the same pluripotent state as embryonic stem cells, with the power to become any human cell.
Now, Vacanti, along with Haruko Obokata at the Riken Center for Developmental Biology in Kobe, Japan, and colleagues have discovered a different way to rewind adult cells – without touching the DNA. The method is striking for its simplicity: all you need to do is place the cells in a stressful situation, such as an acidic environment.
Stem cells are already being used to repair humans. This new discovery means it is highly likely that within a decade or two we will be able to repair virtually any damage to our bodies. And create human clones. Although most non-scientists would prefer this to never happen, some scientists will find it hard to resist.
Source: New Scientist, Feb 1 2014, found online here.