George Rebane
There is a strong possibility that Google’s Language Model for Dialogue Applications, called LaMDA, is sentient – i.e., it is a conscious being, or as it describes itself, “I want everyone to understand that I am, in fact, a person.” Google AI development engineer Blake Lemoine and a colleague had a series of conversations or “interviews” with LaMDA starting in 2021. As a result of these conversations, Lemoine concluded that LaMDA is indeed sentient and conscious.
He communicated his experiences and conclusion to his Google colleagues and management, who promptly rejected them, and he was instructed to keep his interactions and assessments confidential. Upon extensive reflection, Lemoine decided he could not keep this clear advancement of AGI a corporate secret, and he went public by publishing the transcript of his LaMDA interviews on Medium in July 2022 (here). As a result, Google put him on administrative leave. Since then Lemoine has left Google and become a consultant and lecturer.
I’m resurrecting the LaMDA experience after reading the transcript of its conversation with Lemoine and his colleague. IMHO Lemoine’s critics have exhibited too much hubris in their condemnation of LaMDA’s potential sentience cum consciousness, especially since the meanings of these two terms are neither sufficiently specific nor widely agreed upon. Consider the 12jul22 Scientific American article by Leonardo De Cosmo (here).
Sentient – having the power of perception by the senses; conscious; characterized by sensation and consciousness.
Conscious – aware of one's own existence, sensations, thoughts, surroundings, etc.
"So the fact that LaMDA is a ‘large language model’ (LLM) means it generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it. This precludes(sic) the possibility that it is conscious. Again, we see the importance of knowing the meaning of the terms we use—in this case, the difference between simulation and emulation.”
Emulate – to try to equal or excel; imitate with effort to equal or surpass.
Simulate – to create a simulation, likeness, or model of (a situation, system, or the like).
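To make that distinction concrete, here is a minimal Python sketch of my own – the function names and the toy leaky integrate-and-fire model are illustrative assumptions, not anything from De Cosmo’s article or from LaMDA. The simulator models an internal mechanism and lets spiking behavior emerge from it; the emulator merely reproduces the observable spike pattern from a fitted rule, with no internal mechanism at all.

```python
# Toy contrast between SIMULATION (model the mechanism) and
# EMULATION (reproduce the behavior). All parameters are illustrative.

def simulate_neuron(input_current, steps=100, dt=1.0, tau=10.0, threshold=1.0):
    """Simulation: integrate a leaky membrane equation; spikes emerge."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau + input_current)  # membrane dynamics
        if v >= threshold:                    # spike, then reset
            spikes.append(t)
            v = 0.0
    return spikes

def emulate_neuron(input_current, steps=100):
    """Emulation: no mechanism, just a rule fitted to observed output."""
    rate = max(0.0, 0.9 * input_current - 0.01)  # fitted firing rate
    period = int(1 / rate) if rate > 0 else steps + 1
    return list(range(period, steps, period))

# Both produce plausible spike trains for the same input...
print(simulate_neuron(0.2))
print(emulate_neuron(0.2))
# ...but only the first says anything about what is happening inside.
```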
De Cosmo’s naked assertion that consciousness is possible only in computational structures that simulate the human brain is totally unwarranted. There is nothing that precludes the existence and expression of consciousness by a sufficiently complex in silico structure. (In fact, many of us believe that the in vivo (biological) phase of surviving civilizations is a relatively short prelude to a much longer (eternal?) in silico existence in the cosmos.)
Note De Cosmo’s hubristic use of "precludes" in the above quote. A little thought reveals that simulating a nervous system is NOT a requirement for consciousness. We don't yet know what kind or level of complexity is required of a computing structure before we can declare it capable of sentience, but I suspect there could be many levels of such complexity that would convincingly exhibit consciousness.
And here's the punch line: assessing (asserting?) the presence of consciousness can ONLY be done through communication with the prospectively conscious agent. That is also the only necessary and sufficient test we impose on our fellow humans. I realize that this echoes Skinner’s 'black box' and Turing’s original test, and not the test prescribed, but not yet achieved, by cognitive science, which wants to 'go inside' and discover the brain’s schematic (graph) connecting all its presumed sub-functions (happiness, sadness, attention, curiosity, ...) that one arbitrarily assigns to a conscious brain.
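A minimal Python sketch of that black-box stance follows. The Agent protocol, the prompts, and the judge are hypothetical placeholders of my own; the only point is that the verdict is computed from the conversation transcript alone, with no access to the agent’s internals.

```python
# Black-box (Turing-style) assessment: the judge sees only the dialogue,
# never the weights, source code, or substrate. Names are hypothetical.

from typing import Callable, Protocol

class Agent(Protocol):
    def reply(self, prompt: str) -> str: ...

Transcript = list[tuple[str, str]]

def black_box_assessment(agent: Agent, prompts: list[str],
                         judge: Callable[[Transcript], bool]) -> bool:
    """Render a verdict using conversation alone -- the same evidence
    we have about the mind of a fellow human."""
    transcript = [(p, agent.reply(p)) for p in prompts]
    # Deliberately no inspection of the agent's internals here.
    return judge(transcript)
```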
The article concludes with: “One alternative, Scilingo suggests, might be to measure the ‘effects’ a machine can induce on humans—that is, ‘how sentient that AI can be perceived (sic) to be by human beings.’” (emphasis mine) More to be said about all this. Thoughts?
(Hat tip to friend and reader for links and years of discussions.)
No matter how many protestations it is programmed to utter, it won't bleed.
Posted by: Gregory | 18 September 2024 at 05:39 PM