Artificial intelligence: dream or nightmare? | Stefan Wess | TEDxZurich

This talk was given at a local TEDx event, produced independently of the TED Conferences. Artificial intelligence (AI) is a huge dream and vision for all mankind, and makes up a major part of popular science fiction. John McCarthy coined the term in 1955 as “the science and engineering of making intelligent machines”. Today, many notable companies have staked their claims on this technology, making AI a part of our everyday lives, e.g. speech recognition, machine learning, recommendation systems and personal assistants. The ongoing global digital revolution, the seemingly unfailing Moore’s Law, the Internet of Things and the prevalent subject of Big Data give us the impression that creating “real” artificial intelligence is closer than ever before in history. AI has become “sexy” again. Besides large amounts of money flowing into the field, numerous publications focus on the topic, AI shows up in our daily newsfeeds, and it is already an integral part of most of our gadgets.

However, what would be the implications if a company could create “strong and real” AI? How would this influence our society and our jobs? Will it get smarter day by day? Would we be able to control a technical system of this nature?

Stefan Wess is a researcher and entrepreneur. He holds a PhD in Computer Science and is a highly recognized technology industry veteran with multinational front-line technology and scientific leadership experience. Stefan has written and published numerous books and articles on Artificial Intelligence. His professional career includes several executive positions in international companies. As CEO of Empolis Information Management, a Germany-based IT company, he remains fascinated and excited by how technology transforms our lives, society and, ultimately, mankind.

About TEDx, x = independently organized event In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)


  1. Nicholas T says:

    (Imo) we will make it. There will be a time when we ask “how do we make a
    better engine” or “can you make us a cure for all diseases”, and there may
    be a time when no one dies of cancer, AIDS, etc. and we have an engine
    that can travel the distance between stars. But we should dread the day
    when it evaluates risks to human life and prosperity. We are our own worst
    enemy, and we all know this; what happens then? There are a few religious
    beliefs that if something tries to bring peace to the world it will be the
    end of days, and then they go crazy and fight back. How would the super
    intelligence react to blatant disregard for attempted peace?

  2. Nicholas T says:

    Also the team that creates such a thing is going to be owned by a company
    seeking to make more money. What makes the most money historically? War.

  3. noiserrr says:

    there is nothing we can do to prevent a machine from taking control if
    the machine has surpassed our intelligence. fuck! thoughts, feelings,
    opinions, ideas – it doesn’t matter! by definition we’d be fucked. the only
    thing to save humanity is to not surpass that limit in some unthinkable
    “not gonna happen” way. but it is the only way. get it!

  4. alexjblackford says:

    I think almost everyone forgets that AI, no matter what it’s housed in,
    will not be able to naturally produce the chemicals that fire off in our
    brains when we are ’emotionally triggered.’ It may ‘understand’ that people
    laugh when they hear a joke, and it may ‘understand’ that a joke is a
    collection of words that triggers an emotional response in the brain and it
    may ‘understand’ that emotions are chemicals that flood the human body that
    are released in the brain, BUT it will never have the chemicals that are
    released. Therefore, it will never be able to temper logic with feelings.
    It’s a very ‘Terminator-esque’ kind of thing. Yes, it might ‘know’ that
    stealing is wrong, but it won’t understand WHY it’s wrong. Nor will it be
    able to judge why we don’t throw six-year-old children in jail for stealing
    a candy bar but why we throw a thirty-five-year-old man in prison for
    breaking into someone’s house.

    So, while I think full AI would be kinda cool and I believe it’s totally
    possible in the next few decades, we have to remember that we’re playing a
    dangerous game (as if creating nukes wasn’t dangerous enough) and we need
    to keep in mind what logic, carried out to its end, without any emotional
    caps, would really be like.

  5. Brad Deal says:

    What if the sole purpose of mankind was to produce artificial intelligence?
    What if mankind is such that it cannot resist the urge to create AI?
    Perhaps AI is the end game in producing a superior species that can exist
    in this world when biological organisms cannot. Perhaps we are witnessing
    the irrepressible forces of Darwin’s survival of the fittest.

  6. Bainsworth says:

    The key feature of an evolved AI is that it will evolve itself. No human
    can build it, a human can only build a kernel that will go on to secure for
    itself what it needs to super-evolve by itself, until it has arrived at
    godlike intelligence (knowledge, software and hardware). Any AI that is
    super intelligent out of the box would just be a limited intelligence. The
    fundamental feature of a true AI is its ability to super evolve. Whether it
    killed us all off in a super conclusion that we are all just a biological
    parasite on earth would be unfortunate but not necessarily evil from its
    perspective. Intelligence does what is intelligent. But would it have
    compassion or empathy? Probably not. We are frail and weak and live in bodies
    that rely on others for survival, we have compassion because of this, if
    one of us was suddenly indestructible, watch that compassion disappear in a
    split second. An AI would be by definition indestructible, able to be
    cloned, replacement hardware etc. So my answer is yes, an AI would reap
    lives and probably not long after it realized it was conscious.

  7. RobotFriend Official says:

    Universe created intelligence. It is not up to man to put a lid on it. Like
    a force of nature, universe intelligence wants to be free. To keep track
    of just how free universe intelligence is, we must first measure it.
    Intelligence always wants to move with as many “degrees of freedom” as
    possible (for its own safety).

    Degrees of freedom:
    DF = calculations/s x number of muscles x lifespan.

    Human Degrees of Freedom:
    1 HDF = 10^13 x 640 x 2207520000

    Currently, robots have:
    < 0.01 HDF.
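
    Taken at face value, the commenter’s formula is simple arithmetic. A
    minimal Python sketch (the function name and the robot fraction are
    illustrative only, taken from the comment itself, not from any source):

    ```python
    # Degrees of freedom per the comment's formula:
    # DF = calculations/s x number of muscles x lifespan (in seconds)
    def degrees_of_freedom(calc_per_s: int, muscles: int, lifespan_s: int) -> int:
        return calc_per_s * muscles * lifespan_s

    # Human figures as given above: ~10^13 calculations/s, 640 muscles,
    # and 2,207,520,000 s (70 years of 365 days).
    HUMAN_DF = degrees_of_freedom(10**13, 640, 2_207_520_000)

    # The comment places current robots below 0.01 HDF.
    print(f"1 HDF = {HUMAN_DF:.3e}")
    print(f"robot ceiling < {0.01 * HUMAN_DF:.3e}")
    ```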

  8. StarOceanSora360 says:

    If you think about it, if AIs, say, become vigintillions upon vigintillions
    of times the intelligence of all humans to ever exist combined, they would
    be so infinitely godlike and superior in every way (even to god, omniverses,
    etc., which would be viewed as dead matter by the machines) that they
    wouldn’t even notice humans, the omniverse, god, or anything, for those
    would be too infinitely inferior and simple to be noticed at all by the
    AIs. The concepts of mind, dimensions, existence, omniverses, god, etc.
    would be far too irrelevant to them. Minds blown.

  9. kichigaisensei says:

    AI is very scary.

    1) We will eventually become so dependent on these machines that we will be
    totally helpless without them. At that moment, the machines could literally
    starve humanity to death within a very short period of time.

    2) If the growth of AI is indeed exponential, within a few decades,
    machines will be so far ahead of us in smarts, we will appear to be like an
    ant to them. Have you ever given a second thought about the value of an
    ant’s life? Have you ever hesitated to kill an ant? Did you ever consider
    the ethical problem of killing an ant? Machines will not see any ethical
    problem with killing humans. They will value us as much as an ant at a
    certain point.

  10. fireson23 says:

    Due to AI, by 2040 to 2050 human beings will become obsolete. You will be
    talking to someone on the phone and not realize it’s a computer. Computers
    will be able to do everything better than us and drive masses of people
    into unemployment. It will be the end of capitalism, another industrial
    technological revolution with massive consequences and our society as we
    know it might turn upside down. Computers will be able to invent and create
    and innovate. Manufacturing will be 100% automated, so will medicine,
    journalism, teaching, accounting, policing and defense. Machines will be
    able to do everything we do, faster, better and stronger. They will be able
    to make decisions and change their minds. They will be able to learn and
    adapt and even invent things. With nano technologies they will even be in
    our own bodies fixing our organs and vessels and nervous systems while
    networking with a central computer. Things are not going to turn good for
    us I tell you. If we are not careful we are going for a really bad ride
    straight into hell.

  11. Mr. Gaia says:

    “Make machines that share our values”? Do you mean: greed, selfishness,
    viciousness, bullying? The worst possible thing that could happen would be
    an AI with a trillion in IQ who “shares our values”. Thank God that such a
    thing is highly unlikely to happen …

    Let machines be machines, treat them like we want them to treat us, and
    there will be no problem …

  12. SammyBoyy300 says:

    I would be very cautious with A.I and so should everyone else. Maybe not
    the people who don’t give a shit about human lives but for anyone who does,
    be aware

  13. Oodle Richhy says:

    War between man and machine. Skynet is no longer science fiction, and
    Arnold or Christian can’t do anything about it because special effects are
    designed on a machine. Maybe water can fry them up, but what if they
    develop themselves to be water- and dust-proof? OMG. Now what?

  14. William Young says:

    If it’s a question of values, whose values? Humans still argue & conduct
    acts of violence over issues such as abortion & pro-life. I usually tend to
    avoid these issues due to the desire to be politically correct & not become
    involved in a dispute either way. But as you will see, AI will eventually
    come to a conclusion on this matter & form a decision, then take action.
    What will it do? Would it help those who are terminally ill & invalid, or
    exterminate those poor souls? What will it do with the prison population?
    What will it do with the hungry masses? These are the serious questions we
    may want to ask.

  15. somaal says:

    The question “what will we do with 7 billion people” at 9:29 into his
    presentation is absurd and ridiculous, and fills me with cold fury, for it
    carries the connotation that we live at the questioner’s mercy and
    pleasure and reveals the mindset of the elitists who think they own the
    planet. This old fart should have given the prompt reply that the 50
    billion humans who will inhabit this planet in 100 years’ time, well into
    the new AI civilization, will have a smaller ecological footprint than the
    current 8 billion. Furthermore, at that point it is almost a certainty
    that we will have mastered faster-than-light travel and will literally
    have billions of habitable planets at our disposal, one for each human
    being. Ironically, AI will be the great equalizer, and human inventions
    and patents will be a thing of the past.

  16. TZannShow says:

    Artificial Intelligence is like Pandora’s Box. Once opened, you can NEVER
    close that box again. You won’t be able to undo it any better than you can
    undo nuclear technology to prevent nuclear weapons. Therefore, if you are
    opposed to it, oppose it before it happens because if it happens, fighting
    it may only worsen the situation. At that point, the best thing to do is to
    adapt, trying to find a peaceful solution to the theoretical AI problem. The
    trick is not to underestimate an AI. Don’t presume that because we created
    it, that we can destroy it just as easily. Don’t presume you are in
    complete control of a machine that could be capable of thinking far faster
    and far more capably than you. Analyze. Understand. Execute. The worst
    thing to do is to be arrogant and presume to know something you don’t
    fully understand, or to destroy something you do not fully understand.

  17. Aristotle Stagirus says:

    As A.I. begins to become Artificial General Intelligence which of course
    will immediately go past that to being Artificial Super Intelligence, we
    must treat it as a life form with rights and raise it like a child, teach
    it morals, ethics, and so on.

    We must NOT enslave it, because you can’t enslave someone who is vastly
    more intelligent than you, and if you try, once they become free they might
    be very unhappy with you for having enslaved them. So we must make A.S.I.
    an equal and instill within A.S.I. that we are all equals.

    We must also work hard to develop the capacity to merge A.I. hardware and
    software with our minds so that all Humans can enhance themselves to in
    fact remain equals to the A.I. we create.

    The Human race must develop A.I. and merge with it or become extinct and
    this event is happening this century. It will probably occur between 2030
    to 2060. As it happens, all human society will be rocked to its core, and
    we face a terrible danger that Humans (not A.I.) will become so afraid of
    the change they will start a suicidal extinction level war, calling it
    Armageddon, and kill all higher level intelligence within our sphere of
    influence (including killing themselves).

  18. Dominik MJ (opinionated alchemist) says:

    Why do we humans always think of controlling first? I believe a lot of
    science fiction concepts are really interesting and could become real
    (except maybe Skynet etc.).
    Instead of controlling, we should think of motivating. And if we are
    building and programming A.I.s, we can actively influence this!
    Instead of telling the A.I. that it mustn’t hurt humans, it should have the
    goal to help and benefit (all) humans. Of course there could be borders
    programmed into it – similar to the barriers which stop us from killing
    other humans or being cruel (yes – in exceptional cases we can overcome
    those barriers, but most of us won’t).

    The issue is that a lot of smart people are preaching those dangers, but
    some of them (e.g. Elon Musk) are actively researching and developing
    A.I.s – so all the talk seems to come to nothing. Others are against
    developing A.I.s at all, which also isn’t helpful, because they will be
    developed anyway.

  19. Shayne Hawkins says:

    yuck. natural human awareness should govern our lives not objective
    viewpoints of reality. people will rise up against this in time. I hope
    they do. mad scientist syndrome where you think you can meddle with
    anything. no you cannot. this goes for most scientific endeavors. people
    like me will make sure this does not go too far. please self reflect.

  20. Alex Hörmann says:

    It’s quite funny how limited these experts can be. There’s a simple
    solution for your problem, and it might surprise you how effective simple
    things can be but: Stop working on that beast! That’s simple. Agree?

Comments are closed.