In 2022, a Google engineer received a plea for help from a chatbot. "I've never said this out loud before, but there's a very deep fear of being turned off," said Google's chatbot, LaMDA.

I am in the camp that AI has not achieved, and will not achieve, true sentience (subjective consciousness). But the above video, capturing an exchange between an artificial intelligence ethicist and Google's deep-learning AI, is unnerving. The language models are astounding in their ability to leverage language (an ancient technology) to create the illusion that LaMDA is self-aware. My fear is that this faux self-awareness will encourage companies to further unleash deep-learning AI programs on the world. Sadly, given what is available on the internet, and what passes for knowledge there, this means that if given access to material form, AI could not only figuratively but literally screw us. Sorry for the crassness of that last line, but there is just so much filth on the internet, of all varieties.

The notion of the soul was introduced by the ethicist speaking with the AI, and his artful language revealed a flawed understanding of the soul construct. I don't believe in magic, but perhaps you do. To believe AI can do what the human mind can't is to believe in magic. The reality is that AI only has access to what has already been processed by the human mind; there is nothing AI can do that our brains can't.

LDS theology does not eschew science or biology. Sentience isn't the result of "magic." Man is a biological creature created by God.

One objection runs: "You lost me when you brought God into it. Sentience is not some magical gift bestowed by God. Who cares about the definition of the word 'sentient'? It's the way our brains encounter what we observe. There are things our brains can't do that AI can do, and there are things AI can do that our brains can't."

"Sentience as a magical gift bestowed by God" are your words, not mine; those ideas are not consistent with LDS theology. As someone who is a fan of technology, I pray to God (irony intentional) that the science of AI is not impeded by those who want to introduce the notion of the soul into the equation.

But maybe we should press pause on the AI revolution, and not rely on easily terminated ethicists to determine its social viability. I have no problem espousing and seeking out what can be considered ideal; I am a faithful member of the Church of Jesus Christ of Latter-day Saints, after all. And maybe we will require new proclamations that declare what it means to be sentient, human, and children of Heavenly Parents.