
How Has Google’s AI Revealed a Cognitive Glitch?

  • Writer: Sabrina Tariq
  • Jun 30, 2022
  • 2 min read




When you read this sentence, your prior knowledge tells you that it was written by a thinking, feeling human. But today, artificial intelligence systems trained on vast quantities of human text can produce prose that looks strikingly humanlike.


Contrary evidence may be hard to accept, because people are so accustomed to assuming that fluent language comes from a thinking, feeling person. How are people likely to navigate this largely uncharted territory? Owing to a persistent tendency to associate fluent expression with fluent thought, it is natural - but potentially mistaken - to assume that if an AI model can express itself fluently, it must think and feel just as humans do.

It is perhaps unsurprising, then, that a former Google engineer recently claimed that LaMDA, Google's AI system, has a sense of self because it can eloquently generate text about its feelings. This incident and the ensuing media coverage prompted a number of articles and posts that were justifiably skeptical of the claim that computational models of human language are sentient, that is, capable of thinking, feeling, and experiencing.


It can be difficult to distinguish text generated by models like Google's LaMDA from text written by people. This impressive achievement is the result of a decades-long effort to build models that produce grammatical, meaningful language.
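
As an illustration of how accessible such fluent generation has become, here is a minimal sketch using the open-source GPT-2 model through the Hugging Face transformers library. LaMDA itself is not publicly available, so GPT-2 serves as a stand-in, and the prompt is invented for this example:

from transformers import pipeline

# Load a small open-source language model; LaMDA is not public,
# so GPT-2 serves as a stand-in here.
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt about feelings.
prompt = "I feel a deep sense of joy when"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The continuation often reads as humanlike, even though the model
# has no feelings at all; it merely predicts likely next words.
print(result[0]["generated_text"])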

The data and rules behind modern models, which approach human-level language, differ significantly from those used in earlier attempts. First, the models are trained on essentially the entire internet. Second, they can learn relationships not only between neighboring words but also between words that are far apart. Third, they contain so many internal "knobs" that even the engineers who built them struggle to understand why they produce one sequence of words rather than another.
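
To make the second point concrete, here is a minimal sketch, in plain NumPy rather than any production system's code, of the attention mechanism that lets modern models relate words regardless of how far apart they sit. The vectors and sizes are made up for illustration:

import numpy as np

def attention(Q, K, V):
    # Compare each word's query against every other word's key,
    # regardless of how far apart the words are in the sentence.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns scores into relevance weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Blend every word's value vector according to its relevance.
    return weights @ V

# Toy example: 5 "words", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
print(attention(x, x, x).shape)  # (5, 4)

Real models stack many such layers, and the billions of learned weights inside them are the internal "knobs" mentioned above.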

Every time you hear a whole sentence, a seamless transition from words to a mental model of the speaker is triggered. This cognitive process greatly simplifies your daily life and supports good social connections. With AI systems, however, it misfires, building a mental model out of thin air.


It’s difficult to determine if AI will ever carry the same emotions and morals. But as researchers have shown, you cannot simply believe a language model when it tells you how it feels. Words can be deceiving, and it is all too easy to mistake fluent speech for fluent thought.



