
20 Comments
Martin Bollis
1 year ago

I asked it “how many black people were killed by police in the US in 2020.”

The answer, in short, was “I don’t know”, but it referenced WaPo’s Fatal Force database and the Mapping Police Violence project before finishing with:

“It is important to recognise and address issues of police violence and systemic racism…”

So, struggles with facts but no shortage of opinions.

Paddy Taylor
1 year ago

ChatGPT is a terrible name. Just call it Woke-ipedia and have done with it.
A “fact-checked” source of supposedly unvarnished, unfiltered truth – yet it has “learned” from information sources that fit the narrative.
An AI bot that insists, with what passes for a straight face, that gender is merely a social construct and men can bear children, just as long as they call themselves a woman.

Richard Craven
1 year ago

I asked it whether men could become women, and it said they could, which is untrue, so bvgger that for a game of woke soldiers.

Hardee Hodges
1 year ago
Reply to  Richard Craven

If you work at it, you can make it stutter an answer. It does get caught up. But each stutter gets a human to refine the issue.

Darlene Craig
1 year ago
Reply to Richard Craven

What if you asked it whether a mammal born with testes and a p***s could bear children?

Paddy Taylor
1 year ago

Another brilliant machine that is
Designed by computers.
Measured by lasers.
Built by robots.
Programmed by Roberts
…… spot the weak link.

Steve Elliott
1 year ago

One problem with any AI system is that it has to be trained: it has to learn the rules from known data. If there is any bias in the training data set then the AI will learn that bias, and it’s a known feature of many AIs that they can actually amplify it. The bias can arise because humans are used to classify the original training data. For example, if you take a long list of statements, have humans mark each one true or false, and then train the AI on that list, any bias in the humans’ choices will be absorbed into the AI. There are other ways that bias can enter training data sets too.

Last edited 1 year ago by Steve Elliott
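[Editor’s note: the amplification effect this comment describes can be shown with a toy sketch. Everything below is invented for illustration (the topics, the 60% error rate, the majority-vote “model”) and has nothing to do with ChatGPT’s actual training; it only demonstrates how a systematic annotator bias can come out of a trained model stronger than it went in.]

```python
import random
from collections import Counter

random.seed(0)

# 1000 statements, all actually true; half about topic A, half about topic B.
statements = [("A", True)] * 500 + [("B", True)] * 500

def human_label(topic, truth):
    # Biased annotator: marks roughly 60% of topic-B statements "false".
    if topic == "B" and random.random() < 0.60:
        return False
    return truth

labelled = [(t, human_label(t, truth)) for t, truth in statements]

def fit_majority(data):
    # Naive "model": for each topic, predict the majority label it was shown.
    votes = {}
    for topic, label in data:
        votes.setdefault(topic, Counter())[label] += 1
    return {t: c.most_common(1)[0][0] for t, c in votes.items()}

model = fit_majority(labelled)
print(model)  # topic B, mislabelled only 60% of the time, now always comes out False
```

A 60% annotation bias becomes a 100% prediction bias: the model rounds the annotators’ tendency up to a rule, which is the amplification the comment refers to.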
Hardee Hodges
1 year ago
Reply to  Steve Elliott

And if we are to arrive at a helpful bot, once we see the bias we must find ways to reduce the bias. ChatGPT has added filters to avoid that balance.

Brian Villanueva
1 year ago
Reply to Steve Elliott

Hard for an AI to “learn the rules from known data” when the people who made it no longer agree about what is known (e.g. can a man give birth? are whites inherently racist?). These things are shibboleths among “educated” Westerners today, and it’s educated Westerners who define the rules.

Jim Davis
1 year ago

Based on this article I registered on ChatGPT and played with it. I purposely asked questions similar to those mentioned in the article. Besides failing to give a straight answer to any question containing the words “black”, “police”, “LGBTQ”, or a specific culture (“Korean”), the program delivered an admonishment that I should not make assumptions or judgements based on these words, and explained that everyone is different and has value. So ChatGPT is just a digital version of the New York Times: not many facts, the few facts slanted, and all kinds of woke judgement included in the response.

Last edited 1 year ago by Jim Davis
Jim Veenbaas
1 year ago

“In the right hands, it can be used to eliminate political biases endemic to the hiring processes of these organisations. In the wrong hands, it may permanently catechise a particular ideology.”

Hmm. Let me guess what the outcome will be.

Jonas Moze
1 year ago
Reply to  Jim Veenbaas

The General wears a dress?

Steve Jolly
1 year ago

Why should an AI designed and built by humans be any less biased or flawed than the humans that built it? Why should artificial intelligence be any more ‘objective’ or ‘unbiased’ than natural intelligence?

Neil van Wyk
1 year ago

I asked it:

Why Donald Trump shouldn’t have access to nuclear codes

Why Hillary Clinton shouldn’t have access to nuclear codes

Why Joe Biden shouldn’t have access to nuclear codes

It is as if GPT-3 has been writing MSM articles these past couple of years.

Jeremy Bray
1 year ago

AI will merely reflect the society that created it. Of course it will reflect the legal background, and in the West the legal background requires a lot of true statements to be suppressed, supposedly in the name of social harmony. A current Russian or German National Socialist AI would surely have different biases.
You want the truth? AI’s approach is that you can’t handle the truth.

michael stanwick
1 year ago

Jordan Peterson gives his experiences with ChatGPT and comments on further developments.
https://www.youtube.com/watch?v=MpDW-CZVfq8

Alan Hawkes
1 year ago

Ask it to account for a white van man flying a Union Jack flag.

Rasmus Fogh
1 year ago

Hmmm. The problem is real, but the example “black people commit more crime than white people” begs the question ‘where, and under what circumstances?’. ‘Men commit more crime than women’ is such a strong effect that you cannot really deny it. But consider, for instance, ‘rich people are more intelligent than poor people’. I am sure it is true, to the extent that wealth is positively correlated with IQ test score, but which way the causation goes and what is really happening would seem to be pretty much unanswerable.

Ian L
1 year ago
Reply to  Rasmus Fogh

Bias in = Bias out. The term AI is quite dishonest. Artificial: yes, intelligent: no.

Now if (despite the bias) the bot argued back and called BS that could indicate a glimmer of thought…

Frank Eigler
1 year ago
Reply to  Rasmus Fogh

The subject of the article here is not mere “bias” in the dataset, but subsequent manual AI system lobotomy by its operators.
