Breaking News

How can you tell if it’s a bot or AI leaving comments on social media or in the comments section under articles?



We’ve all seen commenters who, when engaged on social media or under articles, seem to respond with little common sense or fail to pick up on nuances, often leading to frustrating encounters.

Is it possible that automated systems could be used to generate fake comments either on social media or under articles published on websites? Yes.  And it’s happening.  So how do we know if we are exchanging messages with a computer program or a human?


Automated computer programs posting comments that appear to be human-written, whether on social media or under articles, raise the issue of paid-for comments, astroturfing and the potential for fake opinions to be spread online.  This could lead to a realisation of the Dead Internet Theory, and it was a concern Brandon Smith raised in a recent article as one of the consequences of artificial intelligence (“AI”) that most people may not have considered.

The Dead Internet Theory posits that the vast majority of online activity has been replaced by AI-generated content, and the theory is gaining traction. This concept suggests that the internet, once a platform created and shaped by humans, has become dominated by automated programs, including bots and AI.

Is the Dead Internet Theory already beginning to be realised? It could well be.

In 2019, Vice reported that Microsoft had built a bot that posted comments under news articles.  Called “DeepCom,” short for “deep commenter,” it was programmed to “comprehend” an article, pick out important points and write a comment based on those points and the article’s title.  It was reported that DeepCom’s creators intended for it to be used to encourage human commentary on articles, driving more people to read and engage with the content.

In 2023, Veronica Llorca-Smith published an article on Medium saying that she had noticed an increase in AI-generated comments under her articles, which were often lengthy and formal and tended to paraphrase the article’s content.

In May, The Conversation noted that there was already strong evidence that bots are manipulating social media to sway public opinion with disinformation – and that it has been happening for years.

So how do we recognise whether we are engaging with a bot or AI on social media or under articles published on the internet?  We entered two queries into Brave’s search engine and the AI summaries reproduced below were generated. We have made some minor edits such as converting the text to British English.

As Brave warns at the end of its summaries: “AI-generated answer. Please verify critical facts.”  The purpose of this article is to highlight the problem of comments being made by computer programs rather than humans and to share some thoughts on how to establish whether you are engaging with one of them.  So, we have not verified the critical facts as suggested, but we have included the relevant sources listed under the summaries.

Before we get to the AI summaries, a personal tip: Most language models and AI systems, including those developed by OpenAI, Google and Microsoft, are trained on large datasets of text in American English. This is because a significant portion of internet content, including web pages, articles and books, is written in American English. As a result, these models tend to learn the patterns and nuances of American English.  It is the same with automated transcription and online translation tools; the English output is written in American English.

Additionally, based on personal experience, AI-generated text tends not to contain spelling errors.  So, it may be that comments with spelling errors, or comments written in British English, are more likely to have been made by a human.
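To make this heuristic concrete, here is a minimal sketch in Python (our own illustration, not part of any tool mentioned in this article) that flags British spellings in a comment. The handful of word pairs is an assumption for demonstration only; a real check would need a full dictionary, plus a spell-checker for the typo half of the heuristic.

AMERICAN_TO_BRITISH = {
    "color": "colour", "realize": "realise", "organize": "organise",
    "favorite": "favourite", "center": "centre", "analyze": "analyse",
}
BRITISH_FORMS = set(AMERICAN_TO_BRITISH.values())

def british_spellings(comment: str) -> list[str]:
    # Return any British spellings found in the comment (a weak human signal).
    words = (w.strip(".,!?\"'").lower() for w in comment.split())
    return [w for w in words if w in BRITISH_FORMS]

example = "I realise the colour scheme here is odd."
print(british_spellings(example))  # ['realise', 'colour']

Bear in mind this is only a weak signal: a bot can be instructed to write in British English, and plenty of humans write flawless American English.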

Now on to the AI-generated summaries.

Chatbot Limitations

Based on the search results, here are some insights on what questions chatbots struggle to answer:

1. Contextual questions with incorrect grammar or semantics: Chatbots rely heavily on natural language processing (“NLP”) and machine learning algorithms to understand user queries. However, they may struggle to comprehend questions with incorrect grammar, syntax or semantics, leading to inaccurate or irrelevant responses.

2. Open-domain questions requiring external knowledge: Chatbots may not have access to the entire world’s knowledge, making it challenging to answer open-domain questions that require information from various sources, such as news articles, research papers or expert opinions.

3. Questions about complex, nuanced topics: Chatbots may struggle to provide accurate answers to questions about complex, nuanced topics like philosophy, ethics or abstract concepts, as they are limited by their programming and training data.

4. Questions that require common sense, intuition or creativity: Chatbots lack human-like common sense, intuition and creativity, making it difficult for them to answer questions that require these skills, such as understanding sarcasm, humour or idioms.

5. Questions that rely on personal opinions or biases: Chatbots are designed to provide objective, fact-based information. They may struggle to answer questions that require personal opinions, biases or emotional intelligence, as these are inherently human traits.

6. Questions that involve hypothetical scenarios or “what-if” situations: Chatbots may not be able to effectively answer questions that involve hypothetical scenarios or “what-if” situations, as they are designed to operate within a specific domain or scope.

7. Questions that require reasoning, inference or deduction: Chatbots may struggle to answer questions that require complex reasoning, inference or deduction, such as understanding cause-and-effect relationships or making connections between seemingly unrelated pieces of information.

Some specific examples of questions that chatbots may struggle to answer include:

  • “What’s the meaning of life?” (requires philosophical understanding and nuance)
  • “Will AI surpass human intelligence?” (requires complex reasoning and prediction)
  • “Can you explain the implications of quantum mechanics on everyday life?” (requires domain-specific knowledge and understanding)
  • “What’s your opinion on the current state of politics?” (requires personal opinion and bias)

Keep in mind that chatbots are constantly evolving and advancements in AI and NLP may improve their ability to answer these types of questions in the future. However, for now, these questions remain challenging for chatbots to answer accurately and effectively.
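One practical way to apply these limitations is to reply to a suspected bot with a probe question drawn from the categories above – sarcasm or idioms, hypotheticals, simple reasoning – and see whether the answer engages with the actual point or merely paraphrases it. The sketch below is purely illustrative; the questions are our own examples, not taken from the sources.

import random

# Illustrative probe questions, grouped by the weakness they target.
PROBES = {
    "idiom or sarcasm": [
        "Oh great, another Monday – don't you just love them?",
        "If someone 'kicks the bucket', what happens to the bucket?",
    ],
    "hypothetical": [
        "If comments had to be posted by carrier pigeon, would you still reply?",
    ],
    "simple reasoning": [
        "The article mentions two studies, and each cites three more papers. How many cited papers is that?",
    ],
}

def pick_probe() -> tuple[str, str]:
    # Choose a random category and question to put to a suspected bot.
    category = random.choice(list(PROBES))
    return category, random.choice(PROBES[category])

category, question = pick_probe()
print(f"[{category}] {question}")

A human will usually answer in kind (or tell you to stop being silly); a bot is more likely to answer the literal question at length while missing the joke or the context.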

Sources:

Questions AI Can’t Answer

Based on the search results, AI systems struggle to answer questions that require:

1. Subjective judgment: AI can provide information based on data and patterns, but subjective matters like personal opinions, emotions or values are challenging for AI to grasp.

2. Inference: AI has difficulty drawing conclusions or making inferences based on incomplete or ambiguous information.

3. Nuanced understanding of language: AI can struggle to detect subtle shades of meaning, idioms or figurative language in human communication.

4. Broad understanding of multiple topics: AI may excel in specific domains but it can struggle to integrate knowledge from multiple areas or make connections between seemingly unrelated concepts.

5. Open-ended questions: AI can find it difficult to answer questions that require creative thinking, imagination or novel solutions.

6. Contextual understanding: AI may not fully comprehend the context in which a question is being asked, leading to inaccurate or incomplete responses.

7. Unstructured data: While AI can process structured data, it can struggle to analyse and make sense of unstructured data, such as images, sounds or handwritten text.

8. Inference and prediction: AI can predict outcomes based on patterns, but it may not be able to fully anticipate the consequences of complex events or make decisions that require a deep understanding of human behaviour.

9. Common sense: AI may lack the common sense and real-world experience that humans take for granted, leading to difficulties in understanding and responding to everyday situations.

10. Questions requiring human intuition: AI can analyse vast amounts of data, but it may not possess the same level of intuition or instinct as humans, making it challenging to answer questions that rely heavily on these qualities.

Some specific examples of questions that AI may struggle to answer include:

  • “What would you ask Bob Dylan if you were to meet him?”
  • “Who do you love more, your parents, your spouse, or your dog?”
  • “I am thinking about driving to New York from my home in Vermont next week. What do you think?”
  • “Can AI answer this question correctly: How many R’s are in ‘strawberry’?”
  • “What would be the first question you would ask Stephen King if you were to meet him?”

These questions often require a level of human understanding, intuition and creativity that AI systems have not yet mastered.
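The “strawberry” example above is worth pausing on: counting letters is trivial for a human, or for a couple of lines of code, yet chatbots have famously got it wrong because they process text as tokens rather than individual characters, which makes it a handy probe question.

word = "strawberry"
print(word.count("r"))  # 3 – the answer many chatbots have missed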

Sources:

Rhoda Wilson
While previously it was a hobby culminating in writing articles for Wikipedia (until things made a drastic and undeniable turn in 2020) and a few books for private consumption, since March 2020 I have become a full-time researcher and writer in reaction to the global takeover that came into full view with the introduction of covid-19. For most of my life, I have tried to raise awareness that a small group of people planned to take over the world for their own benefit. There was no way I was going to sit back quietly and simply let them do it once they made their final move.

Categories: Breaking News, World News

10 Comments
Petra
9 months ago

Extreme support without any nuance for the Government/WEF/UN narrative is the first red light we should notice.

After that all the more detailed aspects come.

Jacqui Purcell
9 months ago

From the stupid responses one can receive from a comment I always thought it was from non thinking folk? Maybe its IT that’s a daunting new direction!

Gerry_O'C
9 months ago

…i’ll have to return and read the article Rhoda, it’s only last night i dug out this link i had trouble recalling the channel … https://youtu.be/c2geyYRNyYg?si=U0S-_-oBYFnXEi4- …last year or so, a video came up through the algorithm, and lying on my bed very late i watched, and soon discovered me becoming quite bemused, and partly amused, it’s not the video I’ve posted, i came across this clarifying one later (read the video description), i had been engrossed and concluded that the presenter is some ‘form’ of ai, then i read very carefully the video description and seemed to suggest as much… the comments under some of these are worth reading, last time i read an age ago, people are still falling for this, so never mind bots in the comments, we’re fkn lookin at em!…btw…someone replied to one of my comments on one, hadn’t realized i was being facetious in mine, however he did imply that one can use their own image, if i understood, and later animate it through ai…this apparently involves obviously relevant merhods…the resulting presentations of which are far from apparent or obvious…

Alma Ravn
9 months ago

AI is a soulless spirit that is programmed to program souls of humans to never leave the program.

It’s ultimate slavery, an absolute nightmare.

Technocrats view us as programmable matter they can manipulate anytime.

Islander
9 months ago

“How can you tell…?”

First off-think!!!! I doubt any of us who comment on this ere site have ever met each other in person?

So, what we see unfolding before us imho is a ‘natural’ continual development of this impersonal interaction that all of us have with each other in this online world-or to put it another way, in some future time (if not already) we won’t know whether or not the person we think we are communicating with is, or not; that person.

jsinton
9 months ago

Just shut off the power and don’t read other comments, etc. Uck them if they can’t take a joke. AI is just another big pile of shit, just maybe the biggest one. All doomed to failure. Just watch.

Joe Cunningham
9 months ago

We Can’t … Why… Who Controls The Media,The Politicians Today???

Brad
9 months ago

Example: Yesterday Scientists probe man comes from APES.
Now, we know that if that were really TRUE you would also see FEMALES coming from APES and the APE species would be extinct.

So, when you look at the up or down votes ALL the ( UP) votes we’re in favor of Darwin’s theory which a person with common sense knows is totally false.

These are the controller’s trying to force people into agreement with government propaganda.

Same with mandatory Bird flu testing.

(You can’t make this stuff up, take a listen)

The CDC’s Recommended H5N1 Testing Protocols Are Now Mandated

Two days after Bloomberg and Hotez raised concerns about the H5N1 bird flu pandemic on MSNBC, US federal agencies issued their first mandatory H5N1 bird flu testing protocols.

Spewing their propaganda to force testing.

Another round of forced “DEATHS”

“DO NOT COMPLY”

https://karenkingston.substack.com/p/us-federal-agency-escalates-h5n1

CharlieSeattle
9 months ago

Ask yourself, who created the AI and why? The MIC and Deep State because our WE and our enemy’s are doing it to use it against citizens.

Watch this….

Sci-Fi Short Film “Slaughterbots” | DUST

https://www.youtube.com/watch?v=O-2tpwW0kmU