
What are your thoughts on A.I.?

Discussion in 'Sidewinders Bar & Grille' started by Guy Named Sue, Apr 10, 2018.

  1. bchaffin72

    bchaffin72 Most Honored Senior Member Strat-Talk Supporter Vendor Member

    Age:
    45
    Oct 4, 2008
    Stratford, Ontario
    Watson is definitely impressive, especially if you look at the specs.

    Software
    Watson uses IBM's DeepQA software and the Apache UIMA (Unstructured Information Management Architecture) framework. The system was written in various languages, including Java, C++, and Prolog, and runs on the SUSE Linux Enterprise Server 11 operating system using the Apache Hadoop framework to provide distributed computing.

    Hardware
    The system is workload-optimized, integrating massively parallel POWER7 processors and built on IBM's DeepQA technology, which it uses to generate hypotheses, gather massive evidence, and analyze data. Watson employs a cluster of ninety IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. In total, the system has 2,880 POWER7 processor threads and 16 terabytes of RAM.

    And yet, for all that, still no awareness of a game, or of human opponents. It was still basically just doing what computers do: accept input, sort data and make calculations, return output.
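
    Put crudely, here is that loop as a trivial Python sketch (the "pipeline" itself is invented, and the arithmetic just multiplies out the figures quoted above):

        # Input -> sort -> calculate -> output: the basic loop, minus the scale.
        def answer_pipeline(clues):
            ranked = sorted(clues)              # sort data
            return [c.upper() for c in ranked]  # make calculations, return output

        # The parallelism figures quoted above:
        servers, cores_per_server, threads_per_core = 90, 8, 4
        print(servers * cores_per_server * threads_per_core)  # 2880 threads
        print(answer_pipeline(["who is watson?", "what is a strat?"]))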
     
    Last edited: Apr 12, 2018
    Guy Named Sue likes this.

  2. CalicoSkies

    CalicoSkies Senior Stratmaster

    Jun 10, 2013
    Hillsboro, OR, USA
    Mansonienne and velvet_man like this.

  3. bchaffin72

    bchaffin72 Most Honored Senior Member Strat-Talk Supporter Vendor Member

    Age:
    45
    Oct 4, 2008
    Stratford, Ontario
    I have a book about that kind of topic: Is Data Human? The Metaphysics of Star Trek.

    Using Data as an example, it discusses how you might determine whether an AI with human capabilities is sentient, and what moral and ethical issues would follow from deciding it's actually a "person" with its own rights, rather than just a machine that can be owned and exploited despite being essentially human in an intellectual and emotional sense.

    And that would be the issue, to me, if we ever managed to do it (and I'm still calling it a BIG if :D). I don't think true, advanced AI would turn on us just for the sake of it. I think it would very much depend on how WE treated IT, and how it felt about that if we treated it badly.
     

  4. CalicoSkies

    CalicoSkies Senior Stratmaster

    Jun 10, 2013
    Hillsboro, OR, USA
    Yes, there seem to be different takes on that in sci-fi literature and film. I'm not sure it would automatically turn against us.
     

  5. CalicoSkies

    CalicoSkies Senior Stratmaster

    Jun 10, 2013
    Hillsboro, OR, USA

  6. bchaffin72

    bchaffin72 Most Honored Senior Member Strat-Talk Supporter Vendor Member

    Age:
    45
    Oct 4, 2008
    Stratford, Ontario
    Even Skynet, if you follow the T2 explanation, didn't turn instantly. People created it, it became more than they intended, and, in a panic, they tried to "kill" it. Skynet didn't like that, so it retaliated.

    Watson is massively parallel, which is why it could play Jeopardy at the same speed as a human. It also takes up A LOT of real estate to have that capability. It wouldn't be getting up and walking around anytime soon, even if it could have a thought to do so... :D
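
    To make the parallelism point concrete, a minimal Python sketch; the word-overlap scorer here is a made-up stand-in for DeepQA's evidence scoring, not the real thing:

        import concurrent.futures

        # Hypothetical stand-in for evidence scoring: count word overlap.
        def score_candidate(candidate, clue):
            return len(set(candidate.split()) & set(clue.split()))

        def best_answer(clue, candidates):
            # Score every candidate at once; spreading this work over
            # thousands of threads is what keeps latency at buzzer speed.
            with concurrent.futures.ThreadPoolExecutor() as pool:
                scores = list(pool.map(lambda c: score_candidate(c, clue), candidates))
            return candidates[scores.index(max(scores))]

        print(best_answer("this computer won jeopardy", ["watson the computer", "deep blue"]))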
     

  7. Chont

    Chont Senior Stratmaster

    Sep 25, 2012
    Pennsylvania
    Folks.

    Be careful about posting code here. This thread could become self-aware.
     
    tery and Mansonienne like this.

  8. bchaffin72

    bchaffin72 Most Honored Senior Member Strat-Talk Supporter Vendor Member

    Age:
    45
    Oct 4, 2008
    Stratford, Ontario
    To code or not to code. That is the question...
     
    Chont likes this.

  9. 2GoneDE

    2GoneDE Strat-Talk Member

    Age:
    45
    Mar 8, 2017
    Las Vegas

  10. mikej89

    mikej89 Senior Stratmaster

    Age:
    29
    May 12, 2016
    River Falls, WI
    I think Allen Iverson was a pretty good NBA player...
     
    Nadnitram likes this.

  11. AncientAx

    AncientAx Most Honored Senior Member Strat-Talk Supporter

    Age:
    58
    Nov 24, 2010
    Maryland
    When you get flipped off by a driverless car, it's time to start worrying... :D
     
    Nadnitram, vid1900, tery and 2 others like this.

  12. roger@pennyflic

    roger@pennyflic Strat-O-Master

    Age:
    60
    Jul 4, 2013
    Melbourne
    I would go back to my example here of an A.I. (robot) waking up (sic) one morning and deciding to learn to play tennis.

    At the very best, the program would have to involve an initialisation section that scanned a predefined list of activities and made a random selection (see the sketch at the end of this post).

    That is not intelligence.

    And imagine the size of the list that would need to be maintained to contain ALL the possible activities a human being could decide to participate in. I might wake up tomorrow and decide to whistle a tune that I make up on the spot. In fact, I will do just that tomorrow. I think we'll be waiting a very long time to develop something capable of doing that. And if you want something like that, creating new human beings, who can in fact do it, is far more fun. :)
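
    For what it's worth, here is that initialisation section rendered literally as Python; the activity list is invented, which is exactly the problem:

        import random

        # The robot's whole universe of possible mornings is whatever
        # someone typed into this list. The entries are made up.
        ACTIVITIES = ["play tennis", "go swimming", "learn guitar", "whistle a tune"]

        def wake_up():
            return random.choice(ACTIVITIES)  # a random pick, not a desire

        print(wake_up())  # "decides" something, but only from the list we maintain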
     

  13. tery

    tery Most Honored Senior Member

    Jul 31, 2014
    Tennessee
    Well... computers play chess, don't they?
     

  14. roger@pennyflic

    roger@pennyflic Strat-O-Master

    Age:
    60
    Jul 4, 2013
    Melbourne
    Yes, they compute possible positions and give each position a score. They then evaluate the scores (see the toy sketch below).

    But after they finish the game, they don't get up, have a sandwich and decide to go for a swim.

    I guess what I consider to be intellect encompasses free will and choice.
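
    Here is that toy sketch, a one-ply "engine" in Python. The evaluation function and the move list are stand-ins, but the shape is the point: score the positions, take the max, stop.

        # Compute possible positions, give each a score, evaluate the scores.
        def evaluate(position):
            # Stand-in for a real evaluation (material, king safety, etc.)
            return sum(ord(ch) for ch in position) % 100

        def choose_move(position, legal_moves):
            scored = [(evaluate(position + " " + m), m) for m in legal_moves]
            return max(scored)[1]  # highest score wins; no sandwich, no swim after

        print(choose_move("start", ["e4", "d4", "Nf3"]))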
     
    tery likes this.

  15. knh555

    knh555 Senior Stratmaster

    Age:
    46
    Dec 6, 2016
    Massachusetts
    Not really. All it needs is control over something that could cause harm, plus some sort of cost function that it's optimizing against. Awareness has nothing to do with it, and I don't believe it's really a concept we could measure anyway. We've had forms of AI for decades, but we're now starting to trust it with very serious things on a much bigger scale, in part because the speed of networks, available data, etc. are catching up to the point where this is practical. That's what's new here.

    Comparatively simple machine learning can get surprisingly complex when a network of algorithms starts interacting. I recall a robotic experiment at MIT where simple robots with a very simple goal were put into a confined space and had to compete for some resource (e.g., collect as many balls as you can). They could also communicate with each other, and they had the ability to learn from their actions, albeit in a limited way. Many, if not most, of the robots started cooperating in complex ways beyond the expected behavior based on the programming. The results are often surprising, which is both the promise and the worry of AI as an adaptive technology.

    Similarly, the mathematical components that go into an artificial neural network are actually quite simple, but when presented with data and trained to do a task, they connect in complex ways that create behavior that's quite difficult to reverse engineer and understand. Working on neural nets back in the '90s, I found that "breaking the rules" regarding how I set up data sets and trained the models could often yield useful, unexpected results. Clearly our understanding of the rules and the resultant behavior was incomplete.
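
    The components really are that simple; a whole "network" fits in a few lines of Python, with the weights below chosen arbitrarily just for illustration:

        import math

        # One artificial neuron: a weighted sum pushed through a squashing function.
        def neuron(inputs, weights, bias):
            activation = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1 / (1 + math.exp(-activation))  # sigmoid

        # A network is just neurons feeding neurons.
        def tiny_net(x1, x2):
            h1 = neuron([x1, x2], [0.8, -0.4], 0.1)
            h2 = neuron([x1, x2], [-0.3, 0.9], -0.2)
            return neuron([h1, h2], [1.2, -1.1], 0.0)

        print(tiny_net(0.5, -0.2))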

    The other key challenge is that people make value-based decisions and, for the most part, take a wide range of experience into account when doing so. Machine learning typically has a very limited cost function that it's optimizing, so it might have difficulty deciding that some normal course of action is actually bad in a particular case. Another way of saying that: machine-trained systems don't share our values. Why would they?
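
    A toy illustration of such a limited cost function (routes and numbers invented): an optimizer that minimizes travel time, and nothing else, happily picks the route a human driver would rule out on judgment.

        # One objective, no values: minimize minutes, and nothing else counts.
        routes = {
            "highway": 12,
            "school_zone": 10,  # fastest, so the optimizer takes it
            "back_roads": 15,
        }
        best = min(routes, key=routes.get)
        print(best)  # school_zone: optimal under the cost function, poor judgment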


    Would an autonomous vehicle be making the right decision to kill its sole occupant and owner if that was the only way to avoid hitting 10 pedestrians? That's a value judgement. Would you buy and ride in a car that you know makes such decisions? Or would you rather be driving in that situation? What if, on average, you are statistically safer in such a vehicle? Are you losing any freedom by asking an autonomous vehicle that's tied into the network to take you places? These are value-based questions, and how we implement AI is really the main question as I see it.
     
    Last edited: Apr 13, 2018
    tery likes this.

  16. 2GoneDE

    2GoneDE Strat-Talk Member

    Age:
    45
    Mar 8, 2017
    Las Vegas
    98% of the leading scientists in that field believe it will absolutely happen.

    Year of ASI/superintelligence achievement, from a survey of experts in the field:

    Median optimistic year (10% likelihood): 2022
    Median realistic year (50% likelihood): 2040
    Median pessimistic year (90% likelihood): 2075

    Another survey:

    By 2030: 42% of respondents
    By 2050: 25%
    By 2100: 20%
    After 2100: 10%
    Never: 2%

    "It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human."

    It is scary. We have ZERO idea what will happen...none. Hubris.

    https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
     

  17. tery

    tery Most Honored Senior Member

    Jul 31, 2014
    Tennessee
    AI is wonderful... we will no longer need to read, think, or make decisions :)
     

  18. Chont

    Chont Senior Stratmaster

    Sep 25, 2012
    Pennsylvania
    Isn't the system's lack of ability to reflect and think about tomorrow what puts the "A" in "AI"?

    I just learned that Avid Media Composer will incorporate AI into upcoming versions. Great... more features for me to have to troubleshoot.
     
    tery likes this.

  19. Sarnodude

    Sarnodude Senior Stratmaster

    Sep 26, 2015
    Mukilteo
    I'm a Mechanical Engineer. This person speaks the truth. Just because we can build it doesn't mean we should unleash it on the world. Some things are best left in labs and science fiction movies.
     
    tery likes this.

  20. bchaffin72

    bchaffin72 Most Honored Senior Member Strat-Talk Supporter Vendor Member

    Age:
    45
    Oct 4, 2008
    Stratford, Ontario
    Fair enough. I suppose one doesn't require intelligence, in the sentient sense, to cause harm. But that still isn't an intentional, thought-out Skynet or Matrix kind of harm. It could be problematic in its own right, but not necessarily humanity-ending.