Amidst Chasing AGI, AI Prediction Models Are Losing Popularity Very Fast!


Scientists seem quite excited about AI prediction models, but are they losing their popularity among the public?

A group of AI researchers claims to have developed a system that can determine a person’s political ideology by looking at their brain scans, using state-of-the-art AI prediction models. Either this is the most powerful AGI system in the known universe, or it’s a complete ruse. It’s a ruse, of course: there’s little cause for excitement. To refute the researchers’ work, you don’t even need to read their paper. All you need are the words “politics change.” But, just for kicks, let’s look at the study itself and see how prediction models function.


How the experiment was conducted

A team of AI researchers from Ohio State University, the University of Pittsburgh, and New York University gathered 174 US college students (median age 21), the vast majority of whom self-identified as liberal, and conducted brain scans on them as they completed a brief battery of tests. In other words, the researchers gathered a group of young people, quizzed them about their political views, and then built a machine that flips a coin to “predict” a person’s political views. Instead of flipping a coin, it purports to do the same thing by using artificial intelligence algorithms to analyze brainwave data.


The issue at hand

The artificial intelligence must forecast either “liberal” or “conservative”; “neither” is not an option in these systems. So, right away, the AI isn’t capable of forecasting or identifying political views at all. It has to pick between the data in column A and the data in column B. Let’s imagine we break into the AI center at Ohio State University and mash up all of their data. We replace all of the brainwaves with Rick and Morty memes, then cover our tracks so that the humans don’t notice.

The AGI will still predict whether the trial subjects are conservative or liberal as long as we don’t modify the labels on the data. You can either believe that the computer has magical powers that allow it to arrive at a ground truth regardless of the data it is given, or you can see that the illusion is the same no matter what kind of rabbits you put in the hat.
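To make that point concrete, here is a minimal sketch (not the authors’ code; the features, sample sizes, and labels are entirely invented) of how a binary classifier, once trained on two labels, can only ever answer with one of those two labels, no matter what you feed it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend "brainwave" features for 174 subjects: rows of arbitrary numbers.
X_brainwaves = rng.normal(size=(174, 64))
y = rng.integers(0, 2, size=174)  # 0 = "liberal", 1 = "conservative"

clf = LogisticRegression(max_iter=1000).fit(X_brainwaves, y)

# Swap the inputs for something meaningless (our "Rick and Morty memes"):
X_memes = rng.uniform(size=(10, 64))
print(clf.predict(X_memes))  # still prints only 0s and 1s, never "neither"
```

Whatever the input, the output vocabulary was fixed the moment the labels were chosen.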


That figure of 70% accuracy is erroneous.

A machine that is 70% accurate at guessing a human’s politics is always 0% accurate at predicting them. This is because human political beliefs do not exist as objective truths. There is no such thing as a conservative or liberal brain. Many people are neither, or a mix of the two. Furthermore, many liberals hold conservative attitudes and mindsets, and vice versa. The researchers don’t define “conservatism” or “liberalism,” which is the first issue: they let the subjects they’re studying define it for them, and keep in mind that these students have a median age of 21.

In the end, this means that the data and the labels bear no relation to each other. What the AI researchers have actually developed is a system with a 50/50 chance of correctly guessing which of two labels they’ve applied to a dataset. Whether the machine is looking for indicators of conservatism in brainwaves, homosexuality in facial expressions, or criminality in skin color, these algorithms all work in the same manner.
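Here is a hedged illustration of that 50/50 point, again with invented data: when labels are assigned independently of the features, a classifier’s held-out accuracy hovers around coin-flip territory, and on a sample as small as 174 a lucky split can push individual numbers well above 50% without meaning anything:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(174, 64))    # arbitrary "brain" features
y = rng.integers(0, 2, size=174)  # labels assigned independently of X

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores)         # individual folds can stray well above or below 0.5
print(scores.mean())  # the average stays close to chance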

They have little choice but to brute-force an inference, so they do. They are only allowed to choose from a limited set of labels, so they do. And because they are black-box systems, the researchers have no idea how it all works, making it impossible to figure out why the artificial intelligence makes any particular inference.


What is the definition of accuracy?

Humans aren’t pitted against machines in these experiments; instead, two different standards are constructed and then conflated. The scientists will give the prediction task to a handful of people once or twice (depending on the controls). They’ll then repeat the prediction exercise hundreds, thousands, or even millions of times with the AI.

Because the scientists have no idea how the machine will arrive at its conclusions, they can’t just enter the ground-truth settings and call it a day. They must train the AI. This entails repeatedly assigning it the same task, for example analyzing data from a few hundred brain scans, and requiring it to run the same algorithms over and over. If the machine got 100 percent on the first try, for no apparent reason, they’d call it a day and declare it flawless, even if they have no idea why; remember, everything takes place in a black box.

More commonly, it fails to meet the desired threshold, and they change the algorithm’s parameters until it improves. You can picture this as a scientist tuning in a radio signal through static without ever looking at the dial.
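As a rough sketch of that tuning loop (the parameter grid and the 0.7 threshold here are assumptions for illustration, not the authors’ protocol), the process looks something like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(174, 64))    # invented stand-in for the scan data
y = rng.integers(0, 2, size=174)  # invented stand-in for the labels

best_score, best_c = 0.0, None
for c in [0.001, 0.01, 0.1, 1, 10, 100]:  # twiddle one knob at a time
    score = cross_val_score(LogisticRegression(C=c, max_iter=1000), X, y, cv=5).mean()
    if score > best_score:
        best_score, best_c = score, c
    if best_score >= 0.7:  # stop as soon as the number "looks" accurate
        break

print(best_c, best_score)  # report whichever setting tuned in the strongest "signal"
```

Run enough knob-twiddling of this kind on a small dataset and an impressive-looking accuracy will eventually appear, whether or not there is any real signal behind it.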
