Martin Bollis
1 year ago
I asked it “how many black people were killed by police in the US in 2020”.
The answer, in short, was “I don’t know”, but it referenced WaPo’s Fatal Force database and the Mapping Police Violence Project before finishing with:
“It is important to recognise and address issues of police violence and systemic racism…”
So, it struggles with facts but has no shortage of opinions.
Paddy Taylor
1 year ago
ChatGPT is a terrible name. Just call it Woke-ipedia and have done with it.
A “fact-checked” source of supposedly unvarnished, unfiltered truth – yet it has “learned” from information sources that fit the narrative.
An AI bot that insists, with what passes for a straight face, that gender is merely a social construct and men can bear children, just as long as they call themselves women.
Richard Craven
1 year ago
I asked it whether men could become women, and it said they could, which is untrue, so bvgger that for a game of woke soldiers.
What if you asked it whether a mammal born with testes and a p***s could bear children?
Paddy Taylor
1 year ago
Another brilliant machine that is
Designed by computers.
Measured by lasers.
Built by robots.
Programmed by Roberts
…… spot the weak link.
Steve Elliott
1 year ago
One problem with any AI system is that it has to be trained: it has to learn the rules from known data. If there is any bias in the training data set then the AI will learn that bias, and it’s a known feature of many AIs that they can actually amplify it. The bias can come about because humans are used to classify the original training data. So, for example, if you have a long list of statements, get humans to say whether each statement is true or false, and then use that list to train the AI, then any bias in the choices made by the humans will be absorbed into the AI. There are other ways that bias can enter the training data sets too.
Hard for an AI to “learn the rules from known data” when the people who made it no longer agree about what is known (ie: can a man give birth?, are whites inherently racist?, etc…) These things are shibboleths among “educated” Westerners today, and it’s educated Westerners who define the rules.
Last edited 1 year ago by Steve Elliott
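The labelling pipeline Steve Elliott describes can be sketched in a few lines. This is a toy illustration with invented statements and an invented labelling rule, not anything drawn from ChatGPT’s actual training: human labellers who share one systematic lean mark the data, and a trivial majority-vote “model” trained on their labels reproduces that lean rather than the ground truth.

```python
# Toy sketch: biased human labels are absorbed by whatever is trained on them.
from collections import Counter

def biased_labeller(statement, bias_towards="false"):
    # Assumed behaviour: on ambiguous statements, labellers fall back
    # on their own preferred answer instead of the ground truth.
    if statement["ambiguous"]:
        return bias_towards
    return statement["ground_truth"]

def train(labelled):
    # A trivial "model": memorise the majority label seen for each statement.
    votes = {}
    for text, label in labelled:
        votes.setdefault(text, Counter())[label] += 1
    return {text: counts.most_common(1)[0][0] for text, counts in votes.items()}

statements = [
    {"text": "clear claim", "ground_truth": "true", "ambiguous": False},
    {"text": "contested claim", "ground_truth": "true", "ambiguous": True},
]
# Three labellers, all sharing the same bias, label every statement.
labelled = [(s["text"], biased_labeller(s)) for s in statements for _ in range(3)]
model = train(labelled)
print(model["contested claim"])  # "false": the labellers' bias, not the truth
```

The point of the sketch is only that no step here is malicious: each labeller, the voting, and the training are individually reasonable, yet the shared lean of the labellers ends up as the model’s “known data”.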
Jim Davis
1 year ago
Based on this article I registered on ChatGPT and played with it. I purposely asked questions similar to those mentioned in the article. In addition to failing to give a straight answer to any question including the words “black”, “police”, “LGBTQ” or a specific culture (“Korean”), the program delivered an admonishment that I should not be making assumptions or judgements based on these specific words and explained that everyone is different and has value. So ChatGPT is just a digital version of the New York Times: not many facts, the few facts it has are slanted, and all kinds of woke judgement are included in the delivered response.
Last edited 1 year ago by Jim Davis
Jim Veenbaas
1 year ago
“In the right hands, it can be used to eliminate political biases endemic to the hiring processes of these organisations. In the wrong hands, it may permanently catechise a particular ideology.”
Hmm. Let me guess what the outcome will be.
Steve Jolly
1 year ago
Why should an AI designed and built by humans be any less biased or flawed than the humans that built it? Why should artificial intelligence be any more ‘objective’ or ‘unbiased’ than natural intelligence?
Neil van Wyk
1 year ago
I asked it:
Why Donald Trump shouldn’t have access to nuclear codes
Why Hillary Clinton shouldn’t have access to nuclear codes
Why Joe Biden shouldn’t have access to nuclear codes
It is as if GPT-3 has been writing MSM articles these past couple of years.
Jeremy Bray
1 year ago
AI will merely reflect the society that created it. Of course it will reflect the legal background, and in the West the legal background requires a lot of true statements to be suppressed, supposedly in the name of social harmony. A current Russian or German National Socialist AI would surely have different biases.
You want the truth? AI’s approach is that you can’t handle the truth.
Ask it to account for a white van man flying a Union Jack flag.
Rasmus Fogh
1 year ago
Hmmm. The problem is real, but the example “black people commit more crime than white people” begs the question ‘where, and under what circumstances?’. ‘Men commit more crime than women’ is such a strong effect that you cannot really deny it. But consider, for instance, ‘rich people are more intelligent than poor people’. I am sure it is true, to the extent that wealth is positively correlated with IQ test scores, but which way the causation goes and what is really happening would seem to be pretty much unanswerable.
The subject of the article here is not mere “bias” in the dataset, but subsequent manual AI system lobotomy by its operators.
If you work at it, you can make it stutter an answer. It does get caught up. But each stutter gets a human to refine the issue.
And if we are to arrive at a helpful bot, then once we see the bias we must find ways to reduce it. ChatGPT has added filters that avoid that balance.
The General wears a dress?
Jordan Peterson gives his experiences with ChatGPT and comments on further developments.
https://www.youtube.com/watch?v=MpDW-CZVfq8
Bias in = Bias out. The term AI is quite dishonest. Artificial: yes, intelligent: no.
Now if (despite the bias) the bot argued back and called BS, that could indicate a glimmer of thought…