Learning AI Poorly: What is AGI?

(originally on LinkedIn)

Most importantly, Happy Thanksgiving! I am so very thankful for everyone’s help, encouragement, and ideas for this weekly article I’ve been writing (still haven’t missed one!). But in the spirit of taking time off for family and traditions, this one is going to be a shorty.

You might have heard that a company called OpenAI had a bit of boardroom drama this week. The comments on various social media sites were my favorite part. There was a lot of talk about AGI, and some comments were downright whimsical. “Sam Altman secretly used company resources to create his own AGI and it got so good it was taking over the company” was a personal favorite of mine.

What is AGI, though?

Artificial General Intelligence

The idea behind Artificial General Intelligence (AGI) is an AI so capable that it can learn any task a human or animal can perform, and likely do it in a way that is indistinguishable from a human. The truth is that if/when AGI becomes possible, it will likely surpass any human capability (in both quality and speed) for the majority of tasks, and quickly. Yikes!

Another definition says that AGI would also have to experience sentience or consciousness. But, to be honest, I don’t think that is strictly necessary, or even desirable, for whatever company gets it to work. If a trained neural network can automatically do something better than a human, it is going to rain money. If the silly thing can also think, my guess is that’s a bug, not a feature.

One reason AGI has been all over the news/posts/comments this week is that it is a primary goal of companies like OpenAI, DeepMind, and Anthropic. Like, AGI is the reason these companies exist, and the reason you can use ChatGPT to write weekly articles for you.

Not that anyone would do that… right?

Time to go stuff myself and pass out on the couch…