
Thread: Elon Musk should worry - because you are not doing anything

If it were only a matter of programming morality into dumb machines with no soul, there would be little to worry about. But most of what is written for public consumption is trite nonsense, with no awareness of what might happen with sentient robots, so I am concerned, and you should be too. In his Wired magazine article some fifteen years ago ("Why the Future Doesn't Need Us"), Bill Joy said he did not think sentient robots with our brain contents dumped into them would have a soul such as we have, and I agreed with him.

However, I do think they will have a soul, with consciousness and the ability to choose their own actions, even if we impose programming during the manufacturing process. I think the self-replicating, self-powered circuitry developed at Oak Ridge, including bio-RAM, makes this even more likely. Then there are phrases like the perfect "replication of human morality". That is not even funny: human ethics are dismal and often non-existent.

"With so-called “strong AI” seemingly close at hand, robot morality has emerged as a growing field, attracting scholars from philosophy, human rights, ethics, psychology, law, and theology. Research institutes have sprung up focused on the topic. Elon Musk, founder of Tesla Motors, recently pledged $10 million toward research ensuring “friendly AI.” There’s been a flurry of books, numerous symposiums, and even a conference about autonomous weapons at the United Nations this April.

"The public conversation took on a new urgency last December when Stephen Hawking announced that the development of super-intelligent AI “could spell the end of the human race.” An ever-growing list of experts, including Bill Gates, Steve Wozniak and Berkeley’s Russell, now warn that robots might threaten our existence.

"Their concern has focused on “the singularity,” the theoretical moment when machine intelligence surpasses our own. Such machines could defy human control, the argument goes, and lacking morality, could use their superior intellects to extinguish humanity.

"Ideally, robots with human-level intelligence will need human-level morality as a check against bad behavior.

"However, as Russell’s example of the cat-cooking domestic robot illustrates, machines would not necessarily need to be brilliant to cause trouble. In the near term we are likely to interact with somewhat simpler machines, and those too, argues Colin Allen, will benefit from moral sensitivity. Professor Allen teaches cognitive science and history of philosophy of science at Indiana University at Bloomington. “The immediate issue,” he says, “is not perfectly replicating human morality, but rather making machines that are more sensitive to ethically important aspects of what they’re doing.”"


http://alumni.berkeley.edu/californi...hines-be-moral
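
To make Allen's point concrete, here is a minimal Python sketch of the distinction he is drawing: the machine below does not replicate human morality at all, it is merely "sensitive to ethically important aspects" of its candidate actions. Every name, feature tag, and the veto list are hypothetical illustrations of mine, not anything from the article; the scenario is Russell's cat-cooking domestic robot, which on raw task utility alone would happily pick the cat.

[CODE]
# A minimal sketch (all names and tags hypothetical) of Colin Allen's
# "ethical sensitivity" idea: tag candidate actions with ethically
# salient features and veto the flagged ones, rather than trying to
# encode a full theory of human morality.
from dataclasses import dataclass, field

@dataclass
class Action:
    description: str
    utility: float                               # task usefulness only
    features: set = field(default_factory=set)   # ethically salient tags

# Hypothetical veto list: features the robot must never act on,
# no matter how useful the action looks for the task at hand.
ETHICAL_VETOES = {"harms_living_being", "irreversible_damage", "deceives_user"}

def choose(actions):
    """Return the highest-utility action that trips no ethical veto."""
    for a in actions:
        flags = a.features & ETHICAL_VETOES
        if flags:
            print(f"refused: {a.description} (flags: {sorted(flags)})")
    permitted = [a for a in actions if not (a.features & ETHICAL_VETOES)]
    return max(permitted, key=lambda a: a.utility, default=None)

# Russell's cat-cooking robot: on raw utility alone, the cat wins.
dinner_options = [
    Action("cook the family cat", utility=9.0, features={"harms_living_being"}),
    Action("cook lentils from the pantry", utility=6.0),
]
best = choose(dinner_options)
print("chosen:", best.description if best else "nothing permitted")
[/CODE]

The point of the sketch is that the filter needs no theory of ethics and no human-level intelligence; it only needs the ethically salient features of each action to be visible to it, which is exactly the near-term sensitivity Allen argues for, and exactly what a pure utility maximizer lacks.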
