11 comments

  1. Asim Deyaf says:

    If I understand him correctly, he’s talking about a way to shorten the time
    between our thoughts and results.
    For instance: Pixar animators take weeks to animate a few seconds. With a
    “neural lace” interface, animators could simply imagine their animation,
    and the “neural lace” would convert those brain signals to character poses
    (instead of manually clicking and adjusting each character part one by one).
    Does anyone understand him differently?

  2. Kevin Martin says:

Am I the only one who can tell that Elon Musk already has a solution for the
output medium and is being careful not to reveal it?

  3. John Smith says:

Why is he dumbing it down? Digital selves? Really? Just say what it would
really be: you will have new senses and limbs that reach into the digital
environment and manipulate information in new ways. Frankly, once this
becomes a reality, anyone who abstains will be little more than a dumb
animal, relatively speaking.

  4. TheAngryCanary says:

Nope, not enough. The idea that we can augment ourselves and somehow keep
pace with AI is absurd. Just to put it into perspective, think back to the
first industrial revolution, when machines replaced muscle power. Imagine
the augmentations you would have to make to a horse to let it keep up with
a car. It’s just not possible.

  5. Dr. Zdenek Moravcik says:

My vision of general AI is that it is already here and the future with it
is bright.
I should know, because I am its inventor. I have general AI ready on my
computer. It is ready for use in industry and society.
I am looking for a robotics company willing to build the first truly
intelligent robots. Let’s make this world better!

  6. ChiaraDental says:

    I think this solution to the AI problem is an interesting one. Artificially
    enhancing human intelligence does seem to circumvent the problem of humans
    being subject to a non-human AI. But I think it also leads to some serious
    problems.

People like E. Yudkowsky believe that it’s possible to make a perfectly
ethical AI, one whose ethical system is essentially this: “Have the ethics
that we collectively say we aspire to have”, or even better, “Have the
ethics that we would collectively have if we were as intelligent as you”.

An AI with this ethical system could be perfectly ethical. It could be
kinder and more just than any person. Simply increasing the intelligence of
human minds doesn’t create a perfectly just AI; it just makes ordinary
humans smarter, each with their current ethical system intact. Imagine the
destruction we could cause if we applied massively greater intelligence to
all of our current conflicts.

    I think we should at least TRY to constrain any super intelligence we
    create with a sound ethical system. Enhancing human brains doesn’t
    immediately seem to address this issue.

Comments are closed.