Q-Star (Q*): OpenAI Researchers Warned Board of “A Powerful Artificial Intelligence Discovery That They Said Could Threaten Humanity”

November 22nd, 2023

The Reuters piece below contains a significant error:

The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI)

That’s not correct.

Superintelligence is not the same as artificial general intelligence; a superintelligence far exceeds the capabilities of an AGI. An AGI capable of recursive self-improvement, though, could set off an intelligence explosion and end up as a superintelligence (a toy sketch of that feedback loop follows the quote below):

The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them.

Book: Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
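To make the “intelligence explosion” step concrete, here is a deliberately crude toy model in Python. It is my own illustration, not anything from the Reuters reporting or from Bostrom’s book; the capability units, improvement_rate, and human_baseline are arbitrary assumptions. The only point is that compounding self-improvement crosses any fixed baseline quickly.

```python
# Toy model of an "intelligence explosion" via recursive self-improvement.
# Everything here is illustrative: the capability numbers, improvement_rate,
# and human_baseline are made up, not taken from the article or from Bostrom.

def generations_to_superintelligence(initial_capability=1.0,
                                     improvement_rate=0.5,
                                     human_baseline=100.0,
                                     max_generations=1000):
    """Assume each self-improvement cycle multiplies capability by
    (1 + improvement_rate) and report when it passes the baseline."""
    capability = initial_capability
    for generation in range(1, max_generations + 1):
        # The system applies its current capability to improving itself.
        capability *= 1.0 + improvement_rate
        if capability >= human_baseline:
            return generation, capability
    return None, capability

if __name__ == "__main__":
    gen, cap = generations_to_superintelligence()
    # With these made-up numbers, capability crosses the baseline in ~12 cycles:
    # compounding growth, not linear progress, is the "explosion" intuition.
    print(f"Baseline crossed at generation {gen}, capability {cap:.1f}")
```

Make improvement_rate itself grow with capability and the curve turns super-exponential, which is the stronger version of the claim.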

Microsoft didn’t make job offers to every OpenAI employee because they were just playing tiddlywinks.

Microsoft probably knew about it.

Via: Reuters:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments ahead of the board’s ouster of Altman, the poster child of generative AI, the two sources said.

Related: Narrow AI vs. General AI vs. Super AI: Key Comparisons
