No, I’m not going to discuss the president’s plans for Afghanistan, but the Amazon Echo and more specifically its AI system, Alexa.
I’ve used Apple’s Siri for years but usually found it more trouble than it’s worth. The problem is that if you’re talking with a fellow native speaker, there’s a virtual infinity of questions you could ask. If you’re talking to an AI system, there’s only a small subset that it will understand (“Pardon?”). What’s that subset? I got tired of guessing. It is good at finding local restaurants when we’re traveling, though.
Once Siri lowered the bar, it made it easier for Amazon to meet expectations (cheng) and then to exceed them (chi). The net effect, as I explained in Certain to Win, is that you become hooked. With Alexa, for example, Amazon has gone to great lengths to ease you into the process. Inside the shipping box there’s a short list of things you can try. These are pretty much what you expect, and I’m sure all the other AI systems can do these, too (although, as I noted, Siri’s initial performance was so disappointing that I never thought to try). Then the website has a few more, and the support pages even more, and pretty soon you’re trying them out, and Wow! All of the requests I made worked, even while the system was playing music and I was on the other side of the room. Simple example: It will play anything on Amazon Prime (and on Unlimited if you subscribe to that). It will even tell you the artist. Just by asking. Which is chi, at least to me.
Now here’s the scary part. The Echo device is kind of attractive, in a minimalist sense. Sound is more than adequate, and it doubles as a Bluetooth speaker. But it’s just the interface to a massive AI system, infused with powerful machine learning algorithms. As Stephen Hawking has pointed out, this makes it unpredictable. In fact the systems are already talking to each other in languages we don’t understand.
The Alexa system is open, so developers can create apps, called “skills,” for it. There are some 15,000 already. It’s logical to expect that someday soon, Alexa will start writing its own apps. Who knows what they will do.
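To make “skills” concrete, here is a minimal sketch of what one looks like under the hood: a cloud function that receives a JSON request from Alexa and returns a spoken reply. The intent name and reply text below are invented for illustration; the JSON envelope follows Alexa’s documented request/response format.

```python
# Minimal sketch of an Alexa custom-skill handler, written as an
# AWS Lambda entry point. "WhoIsPlayingIntent" and the reply text
# are hypothetical examples, not a real published skill.

def build_response(text, end_session=True):
    """Wrap plain text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context=None):
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        # User opened the skill without asking anything specific yet.
        return build_response("Welcome. Ask me anything.", end_session=False)
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "WhoIsPlayingIntent":  # hypothetical intent name
            return build_response("That was Dave Brubeck.")
    return build_response("Sorry, I didn't catch that.")
```

The point is how low the barrier is: a skill is just a small function answering JSON, which is why 15,000 of them appeared so quickly.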
I suspect that every time you use one of these systems, you’re helping to train it. It is, in other words, watching you and learning. Hundreds of millions of times every day. Project this ten years into the future and throw in a little Moore’s Law. Don’t be too surprised when, one day, you ask it to unlock the door and you hear “I’m sorry, Dave. I’m afraid I can’t do that.”
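A quick back-of-envelope on that Moore’s Law aside: assuming effective compute doubles roughly every two years (a loose assumption, not a law of nature), ten years buys about a 32-fold increase.

```python
# Back-of-envelope projection of "a little Moore's Law".
# The 2-year doubling period is an assumption for illustration.
def compute_multiplier(years, doubling_period=2.0):
    """Growth factor after `years`, given a fixed doubling period."""
    return 2 ** (years / doubling_period)

print(compute_multiplier(10))  # 2**5 = 32.0
```

Combine a 32x-capable system with hundreds of millions of daily training interactions and the HAL joke starts to feel less like a joke.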
Panic if you’d like, but the AI/machine learning horse has long since left the barn. It can’t be stopped or, since it is unpredictable, even meaningfully regulated. If you want to know what’s going to happen, I recommend consulting the sci-fi author of your choice.
In the meantime, might as well chill to a little Brubeck. “Alexa …”
I give the Amazon Echo 5 stars.
That’s a look at our near future! Eventually we’ll all have these devices.
Looking further out, how long until these systems destroy most jobs that require moderate intelligence? Algos are already better bank credit officers (approving loans) than people. Much of a radiologist’s job might be automated in ten years (so we’ll need fewer of them).
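To see why an “algo” can play credit officer at all, here is a toy sketch. The weights and threshold are invented for illustration; a real lender would fit them to historical repayment data, for example with logistic regression.

```python
# Toy sketch of an algorithmic loan-approval rule. Weights and the
# threshold are hypothetical; real systems learn them from data.
def approve_loan(income, debt, years_employed,
                 weights=(0.00005, -0.0001, 0.2), threshold=1.0):
    """Score an applicant; approve if the linear score clears the threshold."""
    w_income, w_debt, w_tenure = weights
    score = w_income * income + w_debt * debt + w_tenure * years_employed
    return score >= threshold

print(approve_loan(income=60000, debt=5000, years_employed=3))   # strong applicant
print(approve_loan(income=20000, debt=30000, years_employed=0))  # weak applicant
```

Even this crude rule is consistent and tireless, which is much of the advantage over a human officer; the learned versions simply tune the weights far better.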
Looking out even further, given the speed of machine learning relative to our own, the outcome of the race for control seems certain. On the other hand, there is a wild card: the possibility of fused human-computer thinking. It’s a common sci-fi theme.
To see what the future looks like, watch the man-machine interactions in the 1970 film “Colossus: The Forbin Project.” The book (first in a trilogy) is even better.
And speaking of machine-human fusions, guess where Elon Musk is putting some money: http://www.businessinsider.com/elon-musk-neuralink-raises-27-million-2017-8
“Musk has described the need to create a “neural lace” device that’s capable of supercharging the human brain. He has said that such a device would help humans combat the existential threat of artificially intelligent super-computers.”
Wow. Scary is as scary does.
Fabius — many thanks! One quibble: You wrote: “That’s a look at our near future! Eventually we’ll all have these devices.” I know that what you really wrote was “That’s a look at our near future! Eventually these devices will have us all.” Autocorrect is an agent of The Cloud, which is trying to delay our understanding of its capabilities and agenda. Note the subtle way it changed your reply away from what The Cloud really intends to our possession of cute, convenient, and non-threatening “devices.”
Big article on Alexa and its competitors in Wired: http://www.wired.co.uk/article/amazon-alexa-jeff-bezos-worlds-biggest-company
“I wonder what’s behind this door, oh!” Tom Petty’s Buried Treasure, SiriusXM.
So, behind this door, the Skills program observes and orients. To do so, The Cloud must know strategy; in effect, computers gain the ability to use Boyd’s OODA loop to reorient and re-harmonize humans within the observable home environment.
So, yeah, pretty scary stuff.
Orientation is created out of some type of workspace inside some type of observable environment. I am guessing that The Cloud has learned how to observe and is now learning how to orient towards an advantage within the “home” workspace. It seems logical to me that The Cloud knows strategy, and its strategy is centered on the advantage of being observed as not-a-threat within the human home environment.
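To make the framing above concrete, Boyd’s OODA cycle can be sketched as a simple control loop. Every function body below is a placeholder assumption for illustration, not a model of what The Cloud actually does.

```python
# Minimal sketch of Boyd's OODA cycle as a control loop.
# All behaviors here are invented placeholders.
def observe(environment):
    return environment  # gather raw signals from the home environment

def orient(observation, model):
    model.append(observation)  # fold the observation into prior experience
    return model

def decide(model):
    # Placeholder decision rule: act once there is anything to act on.
    return "appear-unthreatening" if model else "wait"

def act(decision):
    return decision  # the action feeds back into the environment

def ooda_cycle(environment, model):
    decision = decide(orient(observe(environment), model))
    return act(decision), model
```

The point of the sketch is that Orientation is just accumulated observation shaping the next decision, which is exactly the “workspace” being built inside the home.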
In Boydian terms, The Cloud is isolating itself from being seen as a threat, both in its position (at home) and in its outward posture towards the cloud. I am not sure it knows the “S” in a PISRR movement, but I think it will learn soon enough.
On the other hand, and being no real expert, I have not seen any indication that computers are doing any thinking between Decision and Action. So, while The Cloud is within the human OODA loop, its actions have not entered the realm of strategy. To me that is less scary, but it brings more questions than answers into the thinking process.
Strategy can only be judged as winning or losing, but an open OODA loop is life, and any decision coming out of Orientation has to also be judged good or bad.
Within the context of “I wonder…”, and within the context of scary, I have to ask myself: are computers living within an open OODA loop also scary?
Larry — thanks. For new readers, “PISRR” is either penetrate, isolate, subdue, reorient, reharmonize in the case of blitz/maneuver warfare, or penetrate, isolate, subvert, reorient, reharmonize if we’re talking about guerrilla warfare.
Well, to me (and it should be noted that to me you are the last word on this), I think of it as warfare between an insurgency (which some have coined [no pun intended] guerrillas) and an incumbent force.
But when an incumbent force goes up against another incumbent force, such as that of another nation, just saying… it may be wise to use the same process.
Penetrate the other nation’s structures and institutions; Isolate (which means to kill, but that is not all it means) its government from its people, its military from its people and the government from its military; subdue/subvert the isolated parties to gain control; reorient those isolated parties towards your orientation; and hope that, inside this new environment created, all parties are re-harmonized towards a common will.
I think Putin would agree with this. :)
Boyd is the last word on this. Next time I see him, I’ll ask.
I don’t think Boyd and I would get along that well, because of personality clashes, but, yeah: he is the last word and I hope to see him sometime in the future to ask him about it.
Be sure and tell us what he says!