AI Models Now Choose Favorite Numbers Like Humans

AI models are always surprising us, not just with what they can do but with what they cannot. One behavior is both superficial and revealing: when asked to pick a random number, they do it the way humans do.

But first, what does that even mean? Can’t people pick a number at random?

And how can you tell whether someone is actually doing so? Humans have an old, well-documented weakness here: we overthink and misinterpret randomness.

Compare a person’s predictions for 100 coin flips with 100 actual coin flips, and you can almost always tell them apart because, counterintuitively, the real flips look less random.

A genuine sequence will often contain runs of six or seven heads or tails in a row, something almost no human predictor includes in their 100 guesses.
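
That claim is easy to check with a quick simulation. Here is a small sketch (the trial count and the rough percentage are what a fair-coin model predicts, not figures from the article) that measures how often 100 fair flips contain a run of six or more identical outcomes:

```python
import random

def longest_run(flips):
    """Return the length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# Simulate many sequences of 100 fair coin flips and count how often
# a streak of six or more identical results appears.
trials = 10_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(f"Run of 6+ in {hits / trials:.0%} of sequences")  # roughly 80%
```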

Ask someone to pick a number between 0 and 100, and the same thing happens. People rarely pick 1 or 100. Multiples of 5 are uncommon, as are numbers with repeating digits like 66 and 99.

They often pick numbers ending in 7, generally from somewhere in the middle.

There are countless examples of this kind of predictability in psychology. But that doesn’t make it any less strange when AIs do the same thing.

Some curious engineers at Gramener ran an informal but fascinating experiment in which they simply asked several major LLM chatbots to pick a random number between 0 and 100.

Reader, the results were not random.

[Image: the models’ number picks. Image credits: Gramener]
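
For anyone curious to reproduce this kind of test, a minimal sketch follows. It is not Gramener’s actual code; it assumes the openai Python package, an OPENAI_API_KEY environment variable, and an illustrative model name and prompt:

```python
from collections import Counter

from openai import OpenAI  # assumes the `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
counts = Counter()

for _ in range(100):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; any chat model works
        messages=[{
            "role": "user",
            "content": "Pick a random number between 0 and 100. "
                       "Reply with the number only.",
        }],
        temperature=1.0,  # lower toward 0 for the deterministic mode
    )
    text = reply.choices[0].message.content.strip()
    if text.isdigit():
        counts[int(text)] += 1

# A uniform picker would spread its answers across 0-100; in practice
# the tallies pile up on a few "favorite" values.
print(counts.most_common(10))
```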

All three models tested had a “favorite” number that was always their answer when put in the most deterministic mode.

But that number came up most often even at higher “temperatures,” the setting that increases the variability of their results.
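
For readers unfamiliar with that knob, temperature rescales a model’s scores before sampling. The toy snippet below (with made-up scores, not real model outputs) shows why a favorite can dominate at low temperature yet still lead at high temperature:

```python
import math

def softmax(scores, temperature):
    """Turn raw scores into sampling probabilities at a given temperature."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["42", "47", "57", "72", "100"]
scores = [2.0, 3.5, 1.5, 1.0, -2.0]  # hypothetical preferences, 47 favored

for t in (0.1, 1.0, 2.0):
    probs = softmax(scores, t)
    print(t, {c: round(p, 3) for c, p in zip(candidates, probs)})
# At t=0.1 nearly all the probability sits on "47"; higher temperatures
# flatten the distribution, but the favorite still comes out on top.
```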

OpenAI’s GPT-3.5 Turbo really likes 47. Previously, it liked 42, a number made famous by Douglas Adams in The Hitchhiker’s Guide to the Galaxy as the answer to life, the universe, and everything.

Anthropic’s Claude 3 Haiku went with 42. And Gemini likes 72.

More interestingly, all three models showed a human-like bias in the other numbers they selected, even at high temperature.

By and large, they all avoided low and high numbers; Claude never went above 87 or below 27, and even those were outliers. Repeated digits were scrupulously avoided: no 33s, 55s, or 66s, though 77 showed up (it ends in 7).

Round numbers were almost entirely absent, though Gemini did once, at the highest temperature, go wild and pick 0.

Why should this be? AIs aren’t human! Why would they care what “seems” random? Have they finally achieved consciousness, and is this how they show it?

No. As is usually the case with these things, the answer is that we are anthropomorphizing a step too far. These models do not care about what is and isn’t random.

They don’t know what “randomness” is! They answer this question the same way they answer everything else: by looking at their training data and repeating what was most often written after questions that looked like “pick a random number.”

The more often an answer appears there, the more often the model repeats it.
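
A toy stand-in makes that mechanism concrete. Suppose a “model” has nothing but a tally of answers people gave to that question (the counts below are invented) and samples in proportion to them:

```python
import random
from collections import Counter

# Invented tally of answers seen after "pick a random number" prompts.
# Note that 100 was never observed.
seen_answers = Counter({37: 9, 42: 14, 47: 11, 57: 6, 73: 8, 100: 0})

def parrot_a_number(answers: Counter) -> int:
    """Sample an answer in proportion to how often it appeared."""
    values = list(answers)
    weights = list(answers.values())
    return random.choices(values, weights=weights, k=1)[0]

picks = Counter(parrot_a_number(seen_answers) for _ in range(1000))
print(picks.most_common())
# The most frequent training answer (42) dominates, and 100 never
# appears: with zero weight, there is nothing to repeat.
```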

If almost no one ever answers that way, where in their training data would the models see 100? For all the AI model knows, 100 is not an acceptable answer to that question.

With no real reasoning ability and no understanding of numbers whatsoever, it can only answer like the stochastic parrot it is.

It is an object lesson in LLM habits, and in the humanity they can appear to show.

In every interaction with these systems, it is worth remembering that this is simply how they operate, even though they were never deliberately programmed to mimic human behavior.

That is why pseudanthropy is so hard to avoid or prevent.

It is a little misleading to say, as the headline does, that these models choose numbers “like humans.”

They don’t think at all. But their responses, at every turn, imitate people, without any need to know or think anything.

The process is the same whether you are asking for investment advice, a random number, or a chickpea salad recipe.

The results feel human because they were drawn straight from human-produced content and remixed, for your convenience and, of course, for the bottom line of big AI companies.
