
“…the emergence of a mind”

You can accuse him of hyperbole if you like, but I think James Gurney is exactly right. The fact that this sort of thing is happening all around us with increasing frequency is both fascinating and a little spooky.

There’s a sublime scene in the first Terminator movie where a hapless psychiatrist turns off a pager on the way out of a building just as Arnie’s cyborg from the future walks in. Most viewers see the link: how the tiny, primitive devices of today lead to something far more sophisticated (and in that context, sinister) in the future.

There are little fragments of AI in consumer devices that aren’t just incremental steps; they’re markedly different from what came before. They have the flavor of reasoning. Think of mobile apps that pull songs out of the air, recognize products by sight, or listen to our words, understand them, and translate them.

These are the things in your pocket or just lying around the house right now. On the nightstand. On the kitchen table. They’re convenient and cool, but not much more for most people.

After all, it’s not like they’re walking or anything.

Right?


Discussion (23)

  1. RF Victor says:

    A little spooky, alright — but I think we can’t avoid Skynet, we can only avoid an EVIL Skynet. For some reason we MUST do this, we MUST get to the singularity (the emergence of the first true A.I., you know!). I don’t think we can really explain WHY we’re doing this, but we are doing it, and we will keep on doing it.

    Our creators created us to… create, just as we already build machines that can create more of themselves…

    : )

  2. vinegartom says:

    The comments on the Big Dog link are hilarious in light of what you’re saying (e.g., “these devices will be used to save people and explore places too dangerous for people and dogs”; hilarious!). While I think we’re a long way off from Terminators (the govt has even promised not to arm robots with guns; yeah, right), we are probably not that far off from true AI. The question then will be: will it launch nukes and terrorize us with some awful apocalyptic future, or do something far more sinister? Like ratcheting up spam content, bombarding us with mobile phone apps that suck up all our time, and finding new and devious ways to cram ads into personal and private spaces? Self-preservation as a primary self-aware characteristic is a billion times more complex a task when you can analyze the possibilities faster than any newborn ever to have existed. The need might then be not to destroy us, but to maintain us, to herd us… just like they already do.

  3. I think it’s telling that most researchers who work on AI (myself included) do not believe we’re headed for self-aware, super-intelligent systems like Skynet and the Terminators. The fact of the matter is, we’re nowhere near getting computers/robots to think, reason, and problem-solve in the real world. It may seem eerie that the camera can recognize faces, but the mechanism that underlies it is not “markedly different” from what has come before.

    When AI first emerged as a field, people thought (or feared) that we’d have intelligent machines long before the new millennium dawned. We see how well that worked out. Thanks to the emergence of the internet and the huge increases in computing power, I think we’re in another era of irrationally exuberant AI predictions.

    • Scott says:

      Just for the record, I don’t really think that’s the world we’re heading for either — and I’d certainly not attempt to call out dates.

      But the phrase “we’re nowhere near getting computers/robots to think, reason, and problem-solve in the real world” partly depends on your definition of “near,” doesn’t it?

      In the whole sweep of human history, even a century would be pretty near, and though I’m obviously just an interested amateur, I sincerely doubt it will take that long.

  4. Michael says:

    I think we give ourselves too much credit. We make something that seems alive, and because it seems alive, we think it is. We are then alternately spooked/impressed with our own intelligence when we haven’t really done anything great. Since Genesis 3, man has wanted to displace his Creator and put himself on the throne.

  5. Interesting example to choose for “the emergence of a mind,” considering that the camera fails at distinguishing between real faces and pictures of faces.

  6. Machines don’t, and never will, do something they’re not programmed to do. So unless our future Skynet was given the “Become sentient, malevolent, and destroy humanity” line of code, we have little to worry about.

    Now, someone forgetting to program into the wheat-thresher robot the difference between a wheat field and a field full of school children… THAT is a more likely problem.

    • Well, you wouldn’t have to program “evil.” Once you programmed self-awareness, machines could become evil on their own. And of course you always have the “3 laws” hypothetical, where, in order to save more human lives in the long term, machines institute martial law.

      • P.S. I don’t really think this post was meant to get people thinking about evil machines, but rather the issues that arise as computers “think” more and more, forcing us to ponder how to define terms like “thought” and “consciousness” apart from “algorithms” and “memory.”

        And it is a little spooky, not because of the potential for evil machines, but because it’s new. Just like the first film of an oncoming train caused the audience to panic and run from their seats. It takes time to fully understand the implications of new technologies.

          • Well, unlike humans, with their billions of years of evolution behind the messed-up piles of meat they are, robots WILL be intelligently designed creatures. They will have nothing in their “sentience” that wasn’t placed there on purpose.

            • Or placed there accidentally, as in the aforementioned “3-laws” hypothetical, in which a failsafe program carried out to its logical conclusion leads to robotic martial law. A robot that would kill one human to save millions might enslave thousands to save billions.

            Also, if we program “free will,” it doesn’t matter if we don’t tell a computer to be evil. It could still choose to be.

            And computers do things all the time they aren’t told to do. They malfunction.

            On the flip side, I like to fantasize about robots who are self-conscious and nerdy. Robots who are just as silly and messed-up as humans.

              The philosophical and moral reasons for choosing to do good or evil would have to be programmed in. Plus, it would require the information to tell what exactly evil is, what exactly good is, and how to tell the difference between the two; as well as how to put both choices into action, and what those actions could be. Not to mention it would have to include a program that tells it to make the decision in the first place.

              Yes, computers malfunction all the time. When that happens, they crash; they don’t destroy humanity. Barring programmer malevolence, the worst that can happen with AI is an accident due to bad programming.

              You don’t get a big conflicting mess of emotions and motivations like we humans enjoy without the process of natural selection behind it. Perhaps if we were to mimic the same process of genetic trial-and-error replication, eventually we could get our free-will murderous death robots.
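
              (That “genetic trial-and-error replication” has a name, by the way: a genetic algorithm. Below is a toy sketch in Python; the all-ones target, population size, and mutation rate are arbitrary made-up values for illustration, nothing resembling an actual robot mind.)

                import random

                GENOME_LEN = 20
                TARGET = [1] * GENOME_LEN  # toy goal: evolve an all-ones bit string

                def fitness(genome):
                    # selection criterion: how many bits already match the target
                    return sum(g == t for g, t in zip(genome, TARGET))

                def mutate(genome, rate=0.05):
                    # the "error" part: flip each bit with a small probability
                    return [1 - g if random.random() < rate else g for g in genome]

                def crossover(a, b):
                    # the "replication" part: splice two parents at a random point
                    point = random.randrange(1, GENOME_LEN)
                    return a[:point] + b[point:]

                # random starting population; generations of trial and error follow
                population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                              for _ in range(50)]
                for generation in range(200):
                    population.sort(key=fitness, reverse=True)
                    if fitness(population[0]) == GENOME_LEN:
                        break  # a "perfect" genome evolved
                    parents = population[:10]  # keep only the fittest ten
                    population = parents + [
                        mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(40)
                    ]
                print(generation, fitness(population[0]))

              Selection keeps the fittest each generation; crossover and mutation supply the trial and error. The conflicting mess of emotions, presumably, takes a few more generations.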

  7. R. Maheras says:

    I think it’s pretty much a given that when machines can create themselves, think for themselves, have full mobility, and, through advances in AI, develop a sense of identity and self-preservation, the human race’s days will be numbered — or at least humanity’s place at the top of the food chain will be.

    Unless machines have some compelling reason to keep us around, they won’t. And it won’t be all that hard for them to get rid of us, as, by comparison, we are pretty fragile, mentally and emotionally weaker, and easy to kill.

    But what the hell… I’ll be long dead by the time machines get to the point where they may become a threat to humanity.

    Hey… why is that Roomba circling my chair? No… no…. ahhhhhhhhh!!!!

    • Once a machine can think for itself, what’s to stop it from having the same personality flaws as humans? We might see lazy robots. Underachievers. Slobs.

      Also, who’s to say they’ll act in unison? Sure, some might want to destroy humans, just like some groups of humans want to destroy other groups of humans, but maybe some machines will think humans are pretty cool.

      Either way, I think it’s fairly premature to speculate with certainty on the decisions of sentient robots.

  8. Jacob says:

    The comments on that video depress me. I mean, why can’t people think more creatively? And if they really can’t think of any other uses for a walking quadruped, do they have to get so excited about it?
    If that thing gets a gun strapped to it, it WILL take a life. At some point, an innocent one. It doesn’t matter how cool and sci-fi-esque the means are; it’s still killing people in cold blood.

    • Well, so does a grenade that is thrown into a room by a person.

      Can you really say a grenade or a robo-dog “kills in cold blood”? I think that adds a connotation that should be reserved for entities with agency.

  9. Will Curwin says:

    Big Dog still creeps me out, a little.

  10. Jacob says:

    MM: Not my point. Perhaps a poor choice of words. My point is, a lot of people see a robo-dog and they immediately start fantasizing about it blowing stuff up. I like Gundam and Robotech as much as the next nerdy manga fiend, but I try to draw the line at real life. What I was saying before is that no matter how cool and almost fantastical something seems, if it’s a tool for death, that’s all it is: a tool for death. And it sincerely makes me sad that people can’t look at a technology that nobody can deny is intuitive and state of the art without thinking of anything besides how it could be used as a weapon.

    • Will Curwin says:

      I’m not creeped out because I think it will become a weapon; I’m creeped out by how lifelike it is.

  11. I got ya. You’re right. But don’t worry: if I measured culture by the intelligence of people who post on YouTube, I would have committed suicide a long time ago.