Can Brain Science Characterize Artificial Intelligence?

There is a new article in Nature, “ChatGPT broke the Turing test — the race is on for new ways to assess AI”, where the author wrote, “The most famous test of machine intelligence has long been the Turing test, proposed by the British mathematician and computing luminary Alan Turing in 1950, when computers were still in their infancy. Turing suggested an assessment that he called the imitation game. This was a scenario in which human judges hold short, text-based conversations with a hidden computer and an unseen person. Could the judge reliably detect which was the computer? That was a question equivalent to ‘Can machines think?’, Turing suggested.”

“Other researchers agree that GPT-4 and other LLMs would probably now pass the popular conception of the Turing test, in that they can fool a lot of people, at least for short conversations. Rather than the Turing test, researchers instead typically assess AI systems using benchmarks intended to evaluate performance on specific capabilities, such as language ability, common-sense reasoning and mathematical capacity. Increasingly, teams are also turning to academic and professional examinations designed for people. LLMs learn only from language; without being embodied in the physical world, they do not experience language’s connection to objects, properties and feelings, as a person does. LLMs also have capabilities that people don’t — such as the ability to know the connections between almost every word humans have ever written.”

The absence of a consensus on machine intelligence, in spite of what these models have output in recent months, is not because they fail to meet some criteria for human intelligence, but because they lack the biological mechanisms and the breadth of intelligence output that are expected.

Intelligence is generally characterized by outputs, though mechanisms are a factor. IQ tests, fluid and crystallized intelligence, problem-solving, sophisticated planning ability and so forth are outputs of the label. Intelligence, as a useful capability for an organism in a habitat, is expressed in outcomes: evading predators, catching prey, surviving. Intelligence is not commonly rated when it goes unused, whether or not the mechanisms are present.

There are always levels to intelligence, sometimes set by how much of something a person knows, or how much another understands, though there are accepted averages across subjects or situations.

How does the human brain mechanize intelligence? What are the pathways of intelligence for knowledge, understanding, syntax, semantics and so forth?

What is constant in the brain, for everything that produces intelligence? In that constancy, how do different partitions enable those outputs? What is the difference between intelligence and language? What is the difference between thinking and memory? Are emotions subsets of intelligence? Do feelings play a role? How are they all linked?

Conceptually, the most important part of the brain is its impulses. The most important things about them are their interactions and features. Electrical and chemical impulses, interacting, with the features of their sets, can be said to be the human mind.

It is from these features and interactions that everything the mind does is produced, making both direct mechanisms of intelligence. What are these features, and how do impulses interact? The answers can explain why understanding is different from knowledge, why language is different from intelligence, why touch is different from sight, or why there are degrees of taste.

Intelligence helps to know; its mechanisms make knowing possible, for outputs that align with objectives. If intelligence is measured by outputs, it can be postulated that machines already possess a lot of intelligence, not as expansive as the minds’ but applicable at some of the levels at which humans are intelligent.

There are observations of how the brain works with predictions, a concept also used by generative AI. But the brain, in this account, does not make predictions. Some impulses in an incoming set of electrical impulses split off to interact with a set of chemical impulses the way they did before; if the input matches, processing continues, and if not, the other part interacts with the right ones, correcting what is termed an error. Splits are a feature of electrical impulses. Splits are also responsible for what is described as an internal model of the world, used to relate with the external world, so that whatever comes in, a part breaks out, going as earlier sets did or choosing among close options, continuing in that form as long as it works, but correcting at times. This cancels the explanation that the brain makes guesses or does controlled hallucination.
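To make the contrast with prediction concrete, here is a minimal toy sketch in Python. It assumes a deliberately simplified encoding of a “set of impulses” as a short list of numbers; the class name, tolerance and update rule are hypothetical illustrations of the match-then-correct idea described above, not a biological model.

```python
# Toy illustration (not a biological model): part of an incoming "set of impulses"
# is checked against the configuration that worked before; on a match, processing
# continues unchanged, otherwise the stored configuration is corrected toward the
# input. All names and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class ImpulseRelay:
    stored_pattern: list[float]      # the configuration that worked previously
    tolerance: float = 0.2           # how close an input must be to count as a match

    def process(self, incoming: list[float]) -> str:
        # "Split": one part of the input is compared against the stored pattern.
        mismatch = max(abs(a - b) for a, b in zip(incoming, self.stored_pattern))
        if mismatch <= self.tolerance:
            return "match: continue as before"
        # "Correction": the other part interacts with the right configuration,
        # modeled here as moving the stored pattern toward the new input.
        self.stored_pattern = [(a + b) / 2 for a, b in zip(self.stored_pattern, incoming)]
        return "mismatch: corrected"

relay = ImpulseRelay(stored_pattern=[0.5, 0.5, 0.5])
print(relay.process([0.55, 0.45, 0.5]))   # close enough: continues
print(relay.process([0.9, 0.1, 0.5]))     # too far: stored pattern is corrected
```

The point of the sketch is that nothing is forecast in advance; an incoming set either matches what worked before or triggers a correction after the fact.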

There are also sequences, as features of sets of electrical impulses, as well as bounce points of chemical impulses, which together explain associative memory.
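As a rough illustration of sequence-based associative recall, here is a small Python sketch; the class and its storage scheme are hypothetical and only meant to show how stored sequences can let one item call up the item that followed it.

```python
# Toy illustration of sequence-based associative recall (not a biological model).
# A sequence of items is stored, and a cue brings up the item that most often
# followed it before, the way one memory calls up the next. Names are hypothetical.

from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        self.next_items = defaultdict(list)   # item -> items that have followed it

    def store(self, sequence):
        # Record each adjacent pair in the sequence.
        for current, following in zip(sequence, sequence[1:]):
            self.next_items[current].append(following)

    def recall(self, cue):
        # An associative cue returns whatever most often followed it.
        followers = self.next_items.get(cue)
        if not followers:
            return None
        return max(set(followers), key=followers.count)

memory = SequenceMemory()
memory.store(["smell of rain", "wet earth", "childhood yard"])
print(memory.recall("wet earth"))   # -> "childhood yard"
```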

Whenever a set of electrical impulses interacts with a set of chemical impulses, they often strike at ‘stairs or drifts’. It is the fills or rationing of chemical impulses at these stairs or drifts that decide what learning is, how intelligence is developed and provided, how emotions work, how language is possible, why psychiatric drugs have side effects and so forth. The stairs or drifts available within sets of chemical impulses make different brain centers more specialized for functions like modulation, memory and so on. The stairs or drifts also give humans more abilities [or properties] than other organisms.

Human intelligence is within impulses, their features and interactions, not simply a cortical hierarchical provision. LLMs are artificial intelligence, and they can be assessed theoretically in brain science against how the impulses of the mind output natural intelligence.
