View Poll Results: Should we build machines more intelligent than human beings if we could?

Voters: 9
  • Yes, we should: 5 (55.56%)
  • No, we shouldn't: 0 (0%)
  • I don't know: 0 (0%)
  • The question doesn't make sense: 4 (44.44%)
Results 31 to 38 of 38

Thread: Dilemma: Should we build machines more intelligent than human beings if we could?

  1. Top | #31
    Contributor Speakpigeon's Avatar
    Join Date
    Feb 2009
    Location
    Paris, France, EU
    Posts
    6,314
    Archived
    3,662
    Total Posts
    9,976
    Rep Power
    46
    Quote Originally Posted by DBT View Post
    Quote Originally Posted by Speakpigeon View Post
    Quote Originally Posted by DBT View Post
    Would a Super Intelligent Machine/Entity cooperate with fuckwit humans engaging in what It clearly sees as being fuckwit behaviour?
    We can assume that the machines will still be used by humans and not the reverse.

    There would be no difficulty in making intelligent machines devoid of any intention, as are our current desktop computers. They would reply to our queries and do our bidding and no more.

    We could still have killer AI robots, but these would have to be designed and built specifically for this purpose by some fuckwit military. But even then, the killer robot would only try to suppress human-designated targets.

    Although, we would have to make sure we don't unintentionally designate the whole species as a legitimate target. But who would do that?

    Ah, yes, fuckwits.
    EB
    Presumably an intelligent system allows adaptation and self-modification in order to respond to changing conditions. If so, even if we make intelligent machines devoid of any intention, the machine may develop intention during the course of its own self-development and evolution as an intelligent entity; consequently we lose control of the machine. As you said, we are not talking about machines that are only a bit smarter than us.
    No, I am assuming these machines would be more intelligent but incapable of evolving. Intelligence doesn't confer the capacity to evolve. Intelligence allows you to evolve new intelligent ideas, not new intentions or behaviours. Let's assume simple desktop computers, just vastly more intelligent than us. At most, we may decide to use them as killer robots, but even then we could limit their range of behaviours, with no intentionality and no capacity to evolve one.

    What then? Good? Bad? And why?
    EB

  2. Top | #32
    Contributor DBT's Avatar
    Join Date
    May 2003
    Location
    ɹǝpunuʍop puɐן
    Posts
    9,075
    Archived
    17,906
    Total Posts
    26,981
    Rep Power
    71
    Quote Originally Posted by Speakpigeon View Post
    Quote Originally Posted by DBT View Post

    Presumably an intelligent system allows adaptation and self-modification in order to respond to changing conditions. If so, even if we make intelligent machines devoid of any intention, the machine may develop intention during the course of its own self-development and evolution as an intelligent entity; consequently we lose control of the machine. As you said, we are not talking about machines that are only a bit smarter than us.
    No, I am assuming these machines would be more intelligent but incapable of evolving. Intelligence doesn't confer the capacity to evolve. Intelligence allows you to evolve new intelligent ideas, not new intentions or behaviours. Let's assume simple desktop computers, just vastly more intelligent than us. At most, we may decide to use them as killer robots, but even then we could limit their range of behaviours, with no intentionality and no capacity to evolve one.

    What then? Good? Bad? And why?
    EB

    AI has been defined in different ways, depending on who you ask. I wouldn't define desktop computers, et al., as being intelligent. Not in the way we are presumably talking about: the ability to think and reason at a level orders of magnitude above human capability.

    Merriam-Webster defines artificial intelligence this way:

    A branch of computer science dealing with the simulation of intelligent behavior in computers.
    The capability of a machine to imitate intelligent human behavior.

    The Encyclopedia Britannica states, “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Intelligent beings are those that can adapt to changing circumstances.

    Definitions of artificial intelligence begin to shift based upon the goals that are trying to be achieved with an AI system. Generally, people invest in AI development for one of these three objectives:

    Build systems that think exactly like humans do (“strong AI”)
    Just get systems to work without figuring out how human reasoning works (“weak AI”)
    Use human reasoning as a model but not necessarily the end goal

    Turns out that the bulk of the AI development happening today by industry leaders falls under the third objective and uses human reasoning as a guide to provide better services or create better products, rather than trying to achieve a perfect replica of the human mind.

  3. Top | #33
    Contributor Speakpigeon's Avatar
    Join Date
    Feb 2009
    Location
    Paris, France, EU
    Posts
    6,314
    Archived
    3,662
    Total Posts
    9,976
    Rep Power
    46
    Quote Originally Posted by DBT View Post
    AI has been defined in different ways, depending on who you ask. I wouldn't define desktop computers, et al., as being intelligent. Not in the way we are presumably talking about: the ability to think and reason at a level orders of magnitude above human capability.

    Merriam-Webster defines artificial intelligence this way:

    A branch of computer science dealing with the simulation of intelligent behavior in computers.
    The capability of a machine to imitate intelligent human behavior.
    Well, that's interesting, but the word "behaviour" is obviously surplus to requirements.

    Quote Originally Posted by DBT View Post
    The Encyclopedia Britannica states, “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Intelligent beings are those that can adapt to changing circumstances.
    The last bit is off. And the Encyclopedia Britannica (not the same as EB, by the way) doesn't say, as you seem to suggest, that "intelligent beings are those that can adapt to changing circumstances".

    Here is the entire quote:

    Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. https://www.britannica.com/technolog...l-intelligence
    The page only mentions "intelligent beings" in this one sentence, so the Encyclopedia Britannica talks of "tasks", not of adaptation to changing circumstances. Intelligence implies adaptation, yes, but a limited kind of adaptation. That is, it is the reasoning which is adapted to circumstances, not necessarily the machine itself or the behaviour of the machine, as suggested by Forbes. We could have machines more intelligent than us that would nonetheless be unable to adapt their behaviour to changing circumstances. Such adaptation would be something surplus to intelligence. Also, intelligence is not what allows human beings to evolve, for example. So, we should keep these different notions distinct.

    Quote Originally Posted by DBT View Post
    Definitions of artificial intelligence begin to shift based upon the goals that are trying to be achieved with an AI system. Generally, people invest in AI development for one of these three objectives:

    Build systems that think exactly like humans do (“strong AI”)
    Just get systems to work without figuring out how human reasoning works (“weak AI”)
    Use human reasoning as a model but not necessarily the end goal

    Turns out that the bulk of the AI development happening today by industry leaders falls under the third objective and uses human reasoning as a guide to provide better services or create better products, rather than trying to achieve a perfect replica of the human mind.
    Yes, of course, and the main reason for that is that we have so far been unable to model human reasoning properly. If someone did, the situation would change immediately and most people would turn to designing strong AIs.
    EB

  4. Top | #34
    Contributor
    Join Date
    Aug 2003
    Location
    South Pole
    Posts
    9,734
    Archived
    3,444
    Total Posts
    13,178
    Rep Power
    70
    You can't stop them from being built if they can be built, so it's a moot point whichever stance you take.

  5. Top | #35
    the baby-eater
    Join Date
    May 2011
    Location
    Straya
    Posts
    3,794
    Archived
    1,750
    Total Posts
    5,544
    Rep Power
    37
    I think we should build machines more intelligent than us, just to see what happens.

    We can speculate about whether we're going to get a Skynet, Helios or Wintermute, but it doesn't really matter, since all of these outcomes are more interesting than not even trying.

  6. Top | #36
    Contributor Speakpigeon's Avatar
    Join Date
    Feb 2009
    Location
    Paris, France, EU
    Posts
    6,314
    Archived
    3,662
    Total Posts
    9,976
    Rep Power
    46
    Quote Originally Posted by bigfield View Post
    I think we should build machines more intelligent than us, just to see what happens.

    We can speculate about whether we're going to get a Skynet, Helios or Wintermute, but it doesn't really matter, since all of these outcomes are more interesting than not even trying.
    I'm not sure I would have wished to see what the Nazis were going to do. Interesting? I would have said it is interesting to talk about it first, and possibly then decide whether we want to let them access the Reich's government.

    If you think it would be interesting to see, why is it not even more interesting to discuss the possibilities first? Maybe you'd prefer that different sci-fi films explore the different possible scenarios, perhaps with some tin box in the lead role?

    And I don't even know what Helios or Wintermute are! Where have I been, you know?
    EB

  7. Top | #37
    Contributor Speakpigeon's Avatar
    Join Date
    Feb 2009
    Location
    Paris, France, EU
    Posts
    6,314
    Archived
    3,662
    Total Posts
    9,976
    Rep Power
    46
    Quote Originally Posted by Jolly_Penguin View Post
    You can't stop them from being built if they can be built, so it's a moot point whichever stance you take.
    Oh, whoa, that is a seriously pessimistic credo.

    Of course we could stop them if we wanted to.

    For example, we could build super-intelligent robots that we would let loose with the unique task of arresting anyone trying to build super-intelligent machines.
    EB

  8. Top | #38
    the baby-eater
    Join Date
    May 2011
    Location
    Straya
    Posts
    3,794
    Archived
    1,750
    Total Posts
    5,544
    Rep Power
    37
    Quote Originally Posted by Speakpigeon View Post
    And I don't even know what Helios or Wintermute are! Where have I been, you know?
    EB
    So sorry, let me explain:

    Skynet is a DARPA prototype for a super-high-altitude airship (SHAA) that can communicate with mobile radio antennae via tightbeam. It is intended to be an autonomous command system (ACS) that acts as a fallback in the event that the US government is destroyed by a decapitation strike.

    Helios is a merger of two earlier AI projects, Daedalus and Icarus, which in turn evolved from the NSA's Echelon IV network. It's associated with conspiracy theories (Illuminati, Roswell, etc.), but according to the white paper it's just used to analyse smartphone metadata and predict terrorist attacks.

    Wintermute is rumoured to be an AI under development by Pornhub designed to manage the marketplace for their new cryptocurrency, Clitcoin.
