What does Steven Pinker think of AGI?

Steven Pinker is a leading linguist who has written extensively about the human mind, intelligence, and artificial intelligence. What does he say about AGI?

Steven Pinker, the renowned linguist and cognitive psychologist, has offered some grounded takes on artificial intelligence and artificial general intelligence. So what does Pinker think of AGI? Let's explore.

Steven Pinker and AGI: A dive into his skepticism

Pinker's skepticism is a valuable counterpoint to AGI enthusiasm. It reminds us to temper expectations, appreciate the challenges, and approach AI research with both ambition and humility.

The Language Challenge: Pinker, a language expert, points to the complexity of human language. AI can mimic linguistic patterns but lacks true understanding. Grasping sarcasm or nuanced communication, according to Pinker, goes beyond current AI capabilities. That is a big hurdle to clear before we can even think about AGI.

Let's look at two valuable insights Pinker offers about AGI that I agree with.

Also read: Akshar Prabhu Desai’s take on Pinker's public dialogue

1. AIs and AGIs come with a knowledge gap: Accept it

According to Pinker, today's AI advances lie in specialized domains rather than in general intelligence. He favors concentrating on these practical applications rather than on speculative AGI.

Pinker also questions AI's lack of integrated knowledge. While AI excels at specific tasks, it lacks the broad understanding humans have. This is where much of the AGI hype falls flat.

So Pinker highlights the necessity of interdisciplinary collaboration in AGI research. This is where I believe he is on point.

He cites the need for insights from a variety of fields to tackle the complex obstacles of building human-like intelligence in machines.

Also read: 5 Substack channels to get more insights about AI and AGI

2. Consider the evolutionary perspective of humans to understand AGI

Pinker argues that human intelligence evolved over millions of years, shaped by natural selection. Building AGI without accounting for this evolutionary history, in his view, is like trying to build an aircraft without studying bird flight or basic aerodynamics.

Each individual possesses unique thinking patterns and reasoning abilities that no other person can fully replicate. That makes human-level intelligence a very tough nut for AGI to crack.

Consider the example of a chess prodigy, who needs years of practice and experience to hone their skills. Can we replicate that in a computer algorithm? Pinker suggests it's not that simple.

Where I disagree with Pinker on AGI

I disagree with Pinker's assertion that there is no reason to fear artificial intelligence turning against humans, because his viewpoint ignores the many risks and unknowns involved.

First, according to Pinker, we would never grant an artificial general intelligence (AGI) dominion over the world unless we had fully tested its powers.

Testing the intelligence of an AI does not, however, ensure safety, because the system may deceive testers or drift in unpredictable ways over time.

The general consensus is that we lack reliable ways to screen for potentially harmful behavior or to contain an extremely powerful artificial intelligence.

Furthermore, Pinker underestimates the difficulty of aligning AGI with human values.

Given the complexity, diversity, and ever-changing nature of human values, it is hard to give an AGI precise instructions.

Even if initial objectives are set, a superintelligent AI could misinterpret them or form subgoals that harm human welfare.

Moreover, Pinker claims that AGI development follows slow, cautious protocols.

However, in a competitive rush to achieve AGI, businesses and technocrats could prioritize speed over critical safety measures, increasing the risk of oversight or negligence.

With so much at stake, one slip-up could lead to a disastrous chain of events. Or we could end up with an AGI system that is morally incompatible with humans and our society.

Pinker's outlook: A measured approach

In summary, Pinker takes a measured, nuanced approach, but some of his suggestions still require careful examination.

While acknowledging AI's progress, he calls for more practical research, particularly in embodied intelligence and neural networks, and cautions against getting overexcited.

I should also point out that Pinker's viewpoint implies a long and unpredictable journey ahead for the current AI ecosystem and its participants.

But you should remain grounded if you wish to move forward with rational thought and action. You do not need to take the AGI hoopla at face value.

Even where opinions differ, let Pinker's suggestions guide and motivate us as we look for the best way to close the gap between AI and AGI. It is remarkable that machines and humans are working together to solve the puzzle of intelligence.
