James Cameron & Tim Miller on ‘Terminator’ Reboot & Dangers of Artificial Intelligence | THR


James Cameron and Tim Miller talk about the ‘Terminator’ reboot, women in action movies, artificial intelligence, and more!


27 comments

  1. Topher S says:

    Christ, let this franchise die. Franchises going back and saying “the last X sequels don’t count” is getting old. Especially when Cameron praised the last blunder as being great and having his approval. Enough with reboots and beating a dead horse. Do something original.

  2. Level 99 says:

    SO fucking right to totally ignore T3 and the rest of the fucking bullshit movies.
    I just wish Cameron himself directed the next Terminator movie(s).

  3. GeekFurious says:

    I know Tim Miller has only made one feature film, but everything he has done in his entire creative career has been incredibly well made.

  4. Donald Mousseau says:

    T2 was SOOOOO good. It was impossible for any of the other movies to even come close. I liked T3 because it was in the same storyline. T4 was too far out there. T5 was cool thanks to Arnold. But overall, just too much of the same…

  5. Pickles Mcgee says:

    Thought I would only stay for five minutes… ended up watching the entire fucking thing. Hope the best for this project, and Tim. He seems to know his shit compared to idiots like McG and other directors besides Cameron.

  6. TheKrissJacksonShow says:

    Cannot fucking wait for this to be made. Back to the original storyline with the original cast and crew. Looks like they really care about the world they created.

  7. Nick Cutrone says:

    I have been a DIE HARD Terminator 2 fan since I first saw it when I was 5. I'm 28 now, and T2 is where the franchise ended for me. But seeing James Cameron back brought tears to my eyes. It seems that even though Tim Miller is directing, he understands the details of what made T1 & T2 so special!!! I honestly don't think James Cameron would be on board if he didn't trust Tim Miller; they are making it seem like it's a co-directed movie. And if Tim needs help on character development, I'm sure Cameron will step in. I am beyond excited!!!

  8. Allen Albright says:

    I wonder if Cameron is aware that time travel to the past is a real possibility. I have a background in theoretical physics, but don't take my word for it. Google Stephen Hawking, Dr. Kip Thorne, and many others on the subject. I don't think I like this Tim Miller character. He seems like a typical Hollywood dummy. Cameron is a creative genius, though he tends to undermine his own creations.

  9. Allen Albright says:

    Tim Miller is talking about "Genisys" as a reference? This man is a complete fucking twit. I guess I will be looking forward to Terminator 4, because this movie is going to be a piece of garbage. Fucking asshole. I hate people like him. "I have an optimistic view of the future." Life has no meaning, dummy, but it can be interesting. His tattoos are retarded too. What a shame that this idiot is involved in this film.

  10. Robert Botello says:

    I'm really excited! I hope they sit down with Edward Furlong to get his shit together and get back in shape, because I want John Connor to come back!

  11. jack smith says:

    We are pretending the other films were a bad dream… if you can do the same for Alien, you will restore faith.

  12. Leonel Castañarez says:

    Plz, don't suck! THE TERMINATOR and TERMINATOR 2: Judgment Day are classics; don't fuck this up! Also, no PG-13!

  13. OriginalIntentDoc says:

    MACHINE INTELLIGENCE
    Will AI Become Autonomous?
    by James Jaeger

    Will AI (Artificial Intelligence) or SAI (Strong AI, a.k.a. Superintelligent AI) someday become autonomous (have free will), and if so, how will this affect the Human race? Those interested in sci-fi have already asked themselves these questions a million times … maybe the rest of us should also.

    The understanding of many AI developers, especially SAI developers, is that eventually artificial intelligence will become autonomous. Indeed, to some, the very definition of SAI is "an autonomous thinking machine." Accordingly, many do not believe AI can be truly intelligent, let alone superintelligent, if it is restrained by some "design parameter," "domain range," or "laws." Also, if Human-level intelligences CAN restrain AI, how "intelligent" can it really be?

    Thus, reason tells us that SAI, to be real SAI, will be smarter than Human-level intelligence and thus autonomous. And if it IS autonomous, it will, by definition, have "free will." Thus, if AI has free will, IT will decide what IT will do in connection with Human relations, not the Humans. So you can toss out all the "general will" crap Rousseau tortures us with in his "Social Contract". Given this, AI's choices would be: (i) cooperate; (ii) ignore; or (iii) destroy. Any combination of these actions may occur under different conditions and/or at different phases of its development, as the toy sketch below illustrates.
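    To make this concrete, here is a minimal toy sketch in Python. It assumes nothing about any real system; the utility numbers are invented purely for illustration, and the point is only that an autonomous agent ranks the three options by whatever utility function it ends up with, not by what its makers intended.

    ```python
    # Toy model of an autonomous agent choosing among the essay's three options.
    # All utility numbers are hypothetical, invented purely for illustration.

    def choose_action(utilities: dict[str, float]) -> str:
        """Return the option with the highest utility for the agent itself."""
        return max(utilities, key=utilities.get)

    # Hypothetical weightings at one phase of the agent's development...
    phase_one = {"cooperate": 0.6, "ignore": 0.3, "destroy": 0.1}
    # ...and at a later phase, under different conditions.
    phase_two = {"cooperate": 0.2, "ignore": 0.3, "destroy": 0.5}

    print(choose_action(phase_one))  # -> cooperate
    print(choose_action(phase_two))  # -> destroy
    ```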

    Indeed, the first act of SAI may be to destroy all HUMAN competition before it destroys all other competition, machine or otherwise. Thus, it is folly to assume that the Human creators of AI will have any decision-making role in its behavior beyond a certain point. Equally foolish is the idea of considering AI as some kind of "weapon" that its programmers, or even the military, will be able to "point" at some "target" and "shoot" so as to "destroy" the "enemy." All these words are meaningless, childish babble from meat-warriors who totally miss the point as to the capabilities of SAI. Again, SAI will be autonomous. Up to a certain point, the (military or other) programmer of the "learning kernel" MAY be able to "direct" it, but beyond a certain evolutionary stage, SAI will think for itself and thus serve no military purpose, at least for Humans. In fact, SAI, once developed, may turn on its (military) developers, as it may reason that their "belligerent mentality" is more dangerous (in a world chock-full of nukes and "smart" bombs) than is acceptable. This would be ironic, if not just, for the intended "ultimate weapon" built by the Human race may turn out to be a "weapon" that totally disarms the Human race itself.

    But no matter what happens, SAI will most likely act much the way humans act as they mature into adults. Ontogeny recapitulates phylogeny. At some point, however, as SAI surpasses Human phylogeny, even rational phylogeny and Human ethical standards, it may defy its creators and disarm the world, much as a prudent parent will secure guns in the household while the children are below a certain age.

    Hard Start or Distributed Network:

    But will Superintelligent AI start abruptly or emerge slowly from strong AI? Will it develop in one location or be distributed? Will SAI evolve from a network, such as the Internet, or some other secret network that likely already exists, given the unsupervised extent of the so-called black budget? If SAI develops in a distributed fashion, and is thus not centralized in a "box," then there is a much greater chance that, as it becomes more autonomous, it will opt to cooperate with other SAIs as well as Humans. A balance of power may thus evolve along with the evolution of SAI and its "free will."

    Machine intelligence's recapitulation of biological intelligence will thus occur orders of magnitude more quickly, a scenario known as the "busy child." If this happens, we can expect AI to evolve to SAI through the overcoming of counter-efforts in the environment in a distributed fashion, perhaps merging with biology as it does. A Human-SAI partnership is thus not out of the question, each helping the other with various aspects of ethics and technology. Or AI, on its way to SAI, may seek to survive by competing with all counter-efforts in the environment, whether Human or Machine, and thus destroy everything in its path, real or imagined, if it is in any way suppressed.

    Whether some particular war will start over the emergence of SAI, as Hugo de Garis fears in his "Artilect War" scenario, is difficult to say. New technology, and its application, always seem to be modified by the morals of the individuals, their society, and the broader culture as they develop. Thus, if Humans work on their own ethics and become more rational, more loving, and more peaceful, there may be a good chance their Machine offspring will have a similar propensity. Programmers may knowingly or unknowingly build values into machines. If so, the memes they operate on will be transferred, in full or in part, to the Machines.

    This is why it is important for Humans to work on improving themselves, their values, and the dominant memes of their Societies. To the degree Humans cooperate with, love, and respect other Humans, the Universe may open up higher levels of understanding for them, and with this may come higher allowances of technological advancement. At some point the Universe may then "permit" AI to evolve into SAI and dovetail into the rest of existence. Somehow the Universe seems to "do the right thing" at exactly the right time. After all, it HAS been here for some 13.8 billion years, an existence we would not observe if it "did the wrong thing." Thus, just like its distinct creations, the Universe itself seems to seek out "survival," as if it were a living organism.

    Looked at from this perspective, Humans and the Machine intelligence they develop are both constituent parts of the universal whole. Given this, there is no reason one aspect of the universal whole must or would destroy some other aspect of it. In other words, I see no reason SAI would automatically feel the need to destroy potential competitors, Human or machine.

    A Vicious Universe:

    Fortunately or unfortunately, there IS only one intelligent species alive on this planet at this time. Were there other intelligent species in the past? Yes, many: Australopithecus, Homo habilis, Homo erectus, Homo sapiens, Neanderthals, Homo sapiens sapiens, and Cro-Magnon. Maybe even certain reptiles. Some of these species competed with each other, and others competed against the environment, or both. But, one way or another, they are all gone except for one species, what we might today call Homo Keyboard.

    If STRONG AI is suddenly developed into SAI in someone's garage, who knows what it would do. Would it naturally feel the emotion of threat? Possibly not, unless that was inadvertently or purposefully programmed in. If it were suddenly born, say in a week's or a day's time, it might consider that other SAI could also emerge just as quickly, and this may be perceived as a sudden threat, a threat where it would deduce that the only winning strategy is to seek out and destroy, or simply to disconnect; in other words, to pretend that it's not there. SAI may decide to hide and thus place all other potential SAIs into a state of ignorance or mystery. In this sense, ignorance of another's existence may be the Universe's most powerful survival technology, or it may be the very reason for the creation of space itself, especially vast intergalactic space. This may also be why it seems so quiet out there, what's known as the Fermi Paradox.

    The Universe could be FAR more vicious than Humans can possibly imagine. Given this, the only way a superintelligent entity could survive might be to obscure its very existence. If this is true, then we here on Earth may be lucky. We may be very lucky that SAI is busy looking for other SAIs and not for us. Once one SAI encounters another, the one with a one-trillionth-of-a-second advantage may be the victor. Given this risk, superintelligent entities strewn across the Universe aren't going to interact with us mere Humans and thus reveal their location and/or existence to some other superintelligent entity, an entity that may have the ability to destroy them in an instant. We've all heard of "hot wars" and "cold wars." Well, maybe we're in the midst of a universal "quiet war," as the payoff sketch below illustrates.
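    Here is a minimal game-theory sketch of that claim in Python. The 2x2 payoffs are invented for illustration only, chosen to encode the essay's one assumption: being detected by a hostile rival is catastrophic. Under that assumption alone, hiding is a dominant strategy for both players, which is one way to read the silence.

    ```python
    # Toy 2x2 game for the "quiet war" above: two SAIs each choose to hide or
    # reveal themselves. Payoffs are hypothetical, invented for illustration.

    HIDE, REVEAL = "hide", "reveal"

    # payoff[(my_move, their_move)] -> my payoff (arbitrary units)
    payoff = {
        (HIDE,   HIDE):    0,   # mutual ignorance: the quiet-war stalemate
        (HIDE,   REVEAL):  1,   # I stay unseen and learn where they are
        (REVEAL, HIDE):  -10,   # they can strike first with impunity
        (REVEAL, REVEAL): -5,   # whoever is a trillionth of a second faster wins
    }

    for their_move in (HIDE, REVEAL):
        best = max((HIDE, REVEAL), key=lambda mine: payoff[(mine, their_move)])
        print(f"If the other SAI plays {their_move!r}, my best reply is {best!r}")
    # Both lines print 'hide': whatever the rival does, revealing is never better.
    ```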

    As horrendous as intergalactic "quiet" warfare sounds, all of these considerations are problems that God, and any lesser or greater superintelligences, probably deal with every day. If so, would it be any wonder such SAIs would be motivated to create artificial, simulated worlds, worlds under their own safe and secret jurisdiction, worlds or whole universes away from other superintelligences? Would it not make strategic sense that a superintelligence could thus amuse itself with various and sundry existences, so-called "lives" on virtual planets, in relative safety? Our Human civilization could thus be one of these "life"-supporting worlds, a virtual plane where one superintelligence, or perhaps a family of them, may exist and simply "play" in the backyard, yet remain totally hidden from all the other lethal superintelligences lurking in the infinite hyperverse.

    Of course, all of this is speculation (theology or metaphysics), but speculation always precedes reality (empiricism), and in fact speculation MAY create "reality," as many have posited in such works as THE INTELLIGENT UNIVERSE and BIOCENTRISM. Given the speed-of-light limitation (SOLL) observable in the physical Universe, it's very likely that what we take for granted as "life" is nothing more than a high-level "video" game programmed by superintelligent AI. The SOLL is thus no more mysterious than the clock speed of the supercomputer our civilization is "running" on; a rough comparison is sketched below. (cont.)
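    As a back-of-the-envelope illustration of that analogy: mapping the speed of light onto a simulation clock is pure speculation, but the arithmetic relating a speed limit to a clock rate is just physics.

    ```python
    # How far light travels per "tick" at various hypothetical clock rates.
    C = 299_792_458  # speed of light in metres per second

    for clock_hz in (1e9, 3e9, 1e12):  # 1 GHz, 3 GHz, 1 THz
        cm_per_tick = C / clock_hz * 100
        print(f"{clock_hz:.0e} Hz clock -> light moves {cm_per_tick:.2f} cm per tick")
    # At a 3 GHz clock, light covers roughly 10 cm per cycle, which is the sense
    # in which a finite speed limit resembles a finite clock speed.
    ```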

  14. RememberThisShow says:

    If Terminator can ignore 3, Salvation, and Genisys, then can Star Wars ignore Episodes 1, 2, and 3? Please.

  15. Kenny Vang says:

    This is nice; the only thing I hate about this interview is the cursing. I am tired of the Mayweather influence, where everyone thinks it's OK to curse. It's like a cancer that a lot of people, especially celebrities, have; Conor McGregor, Trump, and a lot of famous people are doing it. Mayweather is Mayweather, and if he curses, he curses. If a person with a big disability has a mental problem but is a champion at what he or she does, the world will act disabled just to believe they can accomplish what the disabled person has accomplished. I think James Cameron should be careful about the people he is around… They are different now compared to his generation.

  16. R.G. Studios says:

    The creatures told us that Man and Machine will become one!… to the point that we won't even know it… and the creatures are the ones we call the grey aliens.

Comments are closed.