Israel v Hamas

Why?

Remember, I was answering the comment by Cryst that ALL (his word) Muslims want all Jews killed and wiped off the face of the earth.

If ALL 2.17 billion Muslims wanted this, they would have done it. They have untold wealth and vast military power on a national level, and sheer numbers and religious fervor on an individual level. Who could have stopped them?
I think you may want to look at the numbers involved in the wars against Israel. Being heavily outnumbered just meant more of the enemy were killed.
Haven't most of the Muslim countries already cleared out their Jewry? And then on to the next religion.
 
Why?

Remember, I was answering the comment by Cryst that ALL (his word) Muslims want all Jews killed and wiped off the face of the earth.

If ALL 2.17 billion Muslims wanted this, they would have done it. They have untold wealth and vast military power on a national level, and sheer numbers and religious fervor on an individual level. Who could have stopped them?
People with bigger guns. The US has an arsenal that could wipe out the planet many times over.

Do be serious.
 
Why?

Remember, I was answering the comment by Cryst that ALL (his word) Muslims want all Jews killed and wiped off the face of the earth.

If ALL 2.17 billion Muslims wanted this, they would have done it. They have untold wealth and vast military power on a national level, and sheer numbers and religious fervor on an individual level. Who could have stopped them?
They do. Ask a Muslim if he ever cried over the death of a Jew!
 
I’m doing a wissie here. I never said you were stupid; I said you post stupid things sometimes. I have also never used any AI to sanction or justify what I post. I read and see. Being a humble person, I also know some of my posts are incorrect, and with you being Captain Hindsight you clearly wouldn’t know what that’s like!!!
You might like to try it! It could verify your opinions. I suspect that those here who dislike it only do so because it often doesn’t agree with them. That’s, of course, for the very good reason that their opinions are outliers.
 
You might like to try it! It could verify your opinions. I suspect that those here who dislike it only do so because it often doesn’t agree with them. That’s, of course, for the very good reason that their opinions are outliers.
Ask AI how accurate it is. According to the BBC the answer is not very.
 
You might like to try it! It could verify your opinions. I suspect that those here who dislike it only do so because it often doesn’t agree with them. That’s, of course, for the very good reason that their opinions are outliers.
I can’t dislike something I haven’t tried. What’s your point in that remark?
I haven’t tried being a woman, and neither have you, but you seem to think it’s OK without trying it yourself!
 
Ask AI how accurate it is. According to the BBC the answer is not very.
So I did. They all gave the same kind of answer, which actually is pretty obvious given what they are. No opinion is “accurate”; only facts are. The real question is whether their opinion is more deeply researched and balanced, especially when compared with any opinion originating on the political fringes. This is what Grok returned, used because of its tendency to be sympathetic to right-wing opinion:-

“The accuracy of AI, including models like me, depends on several factors: the quality and quantity of training data, the design of the model, the task at hand, and the context in which it’s used. Here’s a breakdown:

1. Training Data: AI models are only as good as the data they’re trained on. If the data is comprehensive, diverse, and high-quality, the AI is more likely to provide accurate outputs. For example, my knowledge is continuously updated, which helps me stay relevant, but gaps or biases in data can lead to errors or skewed perspectives.

2. Task Specificity: AI accuracy varies by task. For factual queries (e.g., math or historical dates), AI can be highly accurate if the information is clear and well-documented. For complex or subjective questions (e.g., predicting future events or interpreting nuanced social issues), accuracy drops because these involve uncertainty or human judgment.

3. Model Limitations: No AI is perfect. Models can misinterpret ambiguous questions, lack real-time data for breaking events, or struggle with highly specialized knowledge. For instance, I can analyze X posts or web content for real-time insights, but if the sources are incomplete or biased, my response might reflect those limitations.

4. Benchmarks and Studies: Studies on AI accuracy (e.g., evaluations of language models like GPT or LLaMA) show they can achieve over 90% accuracy on specific tasks like question-answering or translation, but performance drops on open-ended or controversial topics. For example, a 2023 study on large language models found they were 85-95% accurate on factual recall but only 60-70% reliable for nuanced reasoning tasks.

5. Practical Example: If you ask me for the boiling point of water (100°C at standard pressure), I’m almost certainly accurate. If you ask about public sentiment on X about a breaking news event, I can provide a snapshot based on posts, but it might not capture the full spectrum of opinions.”

60-70% accuracy for nuanced reasoning tasks would be beyond the most optimistic expectations of many posters here!
 
So I did. They all gave the same kind of answer, which actually is pretty obvious given what they are. No opinion is “accurate”; only facts are. The real question is whether their opinion is more deeply researched and balanced, especially when compared with any opinion originating on the political fringes. This is what Grok returned, used because of its tendency to be sympathetic to right-wing opinion:-

“The accuracy of AI, including models like me, depends on several factors: the quality and quantity of training data, the design of the model, the task at hand, and the context in which it’s used. Here’s a breakdown:

1. Training Data: AI models are only as good as the data they’re trained on. If the data is comprehensive, diverse, and high-quality, the AI is more likely to provide accurate outputs. For example, my knowledge is continuously updated, which helps me stay relevant, but gaps or biases in data can lead to errors or skewed perspectives.

2. Task Specificity: AI accuracy varies by task. For factual queries (e.g., math or historical dates), AI can be highly accurate if the information is clear and well-documented. For complex or subjective questions (e.g., predicting future events or interpreting nuanced social issues), accuracy drops because these involve uncertainty or human judgment.

3. Model Limitations: No AI is perfect. Models can misinterpret ambiguous questions, lack real-time data for breaking events, or struggle with highly specialized knowledge. For instance, I can analyze X posts or web content for real-time insights, but if the sources are incomplete or biased, my response might reflect those limitations.

4. Benchmarks and Studies: Studies on AI accuracy (e.g., evaluations of language models like GPT or LLaMA) show they can achieve over 90% accuracy on specific tasks like question-answering or translation, but performance drops on open-ended or controversial topics. For example, a 2023 study on large language models found they were 85-95% accurate on factual recall but only 60-70% reliable for nuanced reasoning tasks.

5. Practical Example: If you ask me for the boiling point of water (100°C at standard pressure), I’m almost certainly accurate. If you ask about public sentiment on X about a breaking news event, I can provide a snapshot based on posts, but it might not capture the full spectrum of opinions.”

60-70% accuracy for nuanced reasoning tasks would be beyond the most optimistic expectations of many posters here!
The BBC quotes a report claiming AI misrepresents news stories 45% of the time. In any case, it’s not particularly accurate on anything other than verifiable facts, which it’s just repeating.
 
Ask AI how accurate it is. According to the BBC the answer is not very.
AI relies on libraries of information from which it extrapolates answers.

Those sources of information were created by humans.

Halfwits like Wisbech don’t realise that this makes AI just as vulnerable to manipulation or error as any other source of information.

Is the Wisbech bot a perfect example itself?
 
