Uncovering Propaganda, Deception, and Bias in Media Reporting: Using Text Analytics as a Scientific Tool for Automatic Detection (Part 3)

By Dr. John Aaron

This session is the third of a three-part series that builds on the analytical foundation and tools discussed in sessions #1 and #2. It is not necessary to have attended the earlier sessions to understand or benefit from this event.

The use of computer-generated text, in the form of AI chat and AI-driven search on the internet, is expanding rapidly. These tools, often branded as artificial intelligence (AI), are typically not as intelligent as we are led to believe. Yet their widespread adoption opens the door to massive propaganda and deception on the internet.


Session 3 explores the underlying concepts of computer-generated text: how to identify it using Wordstat in combination with other statistical tools, how it can be useful when used in good faith, and how to avoid being deceived by it. The session also contrasts the approach to intelligence taken by AI with how intelligence actually operates in the human brain.


Session details:

  • Examples of computer-generated text used in internet chatting.
  • Examining a common text generation method: Long Short-Term Memory (LSTM) networks. How the method works, where it can be useful, and its limitations compared to human intelligence.
  • Discussing ways to identify computer-generated text using Wordstat.
  • How computer-generated text potentially opens the door to propaganda and deception, and how to avoid being deceived by it.
  • Comparing AI operations with operations in the human brain.
  • Mitigating the societal risks of computer-generated text on the internet.
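While the session itself uses Wordstat, the statistical intuition behind spotting generated text can be sketched in a few lines of plain Python. The sample texts and the repeated-n-gram heuristic below are illustrative assumptions, not the session's actual method: simpler generators such as LSTMs often fall into repetitive phrasing, which a repeated-n-gram rate can flag.

```python
from collections import Counter

def repeated_ngram_rate(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that occur more than once.

    Text from simple generative models often loops through the same
    phrases, giving a much higher repetition rate than human prose.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

# Hypothetical samples: looping machine output vs. varied human prose.
machine_like = "the news is good the news is good the news is good today"
human_like = "reporters verified each claim before the story went to print"

print(repeated_ngram_rate(machine_like))  # high repetition
print(repeated_ngram_rate(human_like))    # little or no repetition
```

A real detector would combine many such cues (vocabulary richness, sentence-length variance, keyword distributions) rather than rely on a single threshold; this sketch only shows the kind of statistic involved.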