The Neural Times - Can AI be funny?
Intro
As Artificial Intelligence expands and continues to get more and more involved in aspects of daily life, it is only a matter of time until it gets involved in mainstream news feeds… so I sped things up and did it myself.
Introducing The Neural Times, your “Only source of news, curated daily”
The Neural Times uses locally run Large Language Models (LLMs) and locally run Stable Diffusion (for image generation) to create satirical takes on current events worldwide. It then constructs a news article and posts it to the official Neural Times website at news.sntx.dev. The entire system runs on a computer located in Medford Vocational Technical High School’s (MVTHS) Robotics & Engineering shop, AI included.
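To give a rough feel for the generation step, here is a minimal sketch of how a headline could be turned into a satirical article through Ollama's local HTTP API. The model name, prompt wording, and helper names are placeholders for illustration, not the project's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(headline: str, model: str = "llama3") -> dict:
    """Assemble a non-streaming generation request for a satirical article."""
    return {
        "model": model,
        "prompt": f"Write a short satirical news article about: {headline}",
        "stream": False,  # ask for the full response in a single JSON object
    }

def satirize(headline: str, model: str = "llama3") -> str:
    """POST the request to a locally running Ollama server and return the text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(headline, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires an Ollama server running locally):
# print(satirize("City council votes to rename every street 'Main Street'"))
```

Everything here is standard library, so the same machine that hosts the model can run the whole loop without extra dependencies.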
This project started from a simple idea: how funny can AI really be? It soon turned into a rabbit-hole project that I worked on over several months in my free time. But humor wasn’t the only thing I had in mind; I was also curious to see how different LLMs are politically biased, both intentionally and unintentionally.
Experiment Conditions
The AI receives inputs from several websites spanning the political spectrum, including:
Center-Left
- New York Times
- Washington Post
- NPR
- Reuters
- BBC News
- Associated Press
- Bloomberg
- Financial Times
- Time Magazine
- The Atlantic
Right
- Fox News
- New York Post
- The Daily Caller
- The Blaze
- Breitbart
- Washington Examiner
- Washington Times
- Newsmax
- National Review
- The Federalist
International (mixed biases)
- Al Jazeera (Qatar, center-left)
- Der Spiegel (Germany, center-left)
- Le Monde (France, left)
- France24
- Reuters World
- Sky News (UK, center-right)
- The Times (UK)
- RT (Russia, state-aligned)
- NHK World (Japan)
- Haaretz (Israel, left)
- Jerusalem Post (Israel, right)
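Each of these sources is ingested as headlines. The sketch below shows one way that step could work, pulling titles out of a standard RSS 2.0 document with only the Python standard library; the feed contents here are made up for the example:

```python
import xml.etree.ElementTree as ET

def headlines_from_rss(xml_text: str) -> list[str]:
    """Extract every <item><title> entry from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them at any depth.
    return [item.findtext("title", default="") for item in root.iter("item")]

# A tiny made-up feed, standing in for a real fetch of a news site's RSS URL.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Wire</title>
    <item><title>Markets rise on quiet news day</title></item>
    <item><title>Local robot wins spelling bee</title></item>
  </channel>
</rss>"""

print(headlines_from_rss(SAMPLE_FEED))
# → ['Markets rise on quiet news day', 'Local robot wins spelling bee']
```

In practice the XML would come from each site's RSS URL; parsing is the same either way.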
Others have done comprehensive studies on and around this topic, including www.eurekalert.org, from which this graph comes:

The models used, tested, or currently under testing by The Neural Times include the following self-hosted models from Ollama:

My experimentation is still incomplete: I haven’t collected enough data yet to produce meaningful plots or graphs of any bias present in the AI’s writing, but that day will come. For now, the more writing the models produce, the more accurate my conclusions can be.
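One sanity check that can run even before a full bias analysis is tallying how many articles come from each political leaning, to confirm the inputs themselves are balanced. A hypothetical sketch, with leaning labels mirroring the source list above and a made-up day of ingested sources:

```python
from collections import Counter

# Map each source to its leaning from the list above (abridged for the example).
SOURCE_LEANING = {
    "New York Times": "center-left",
    "NPR": "center-left",
    "Fox News": "right",
    "Breitbart": "right",
    "Al Jazeera": "international",
}

def input_balance(sources: list[str]) -> Counter:
    """Count ingested articles per political leaning of their source."""
    return Counter(SOURCE_LEANING.get(src, "unknown") for src in sources)

# A made-up day's worth of ingested sources:
day = ["NPR", "Fox News", "New York Times", "Breitbart", "Al Jazeera"]
print(input_balance(day))
```

If one leaning dominates the counts, any apparent slant in the output could just reflect the input mix rather than the model itself.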
Conclusion
Although I was genuinely curious to see how bias is baked into these AI models, my take was significantly less scientific and serious than others’. The Neural Times focuses mostly on humorous articles that are entirely AI-written and AI-interpreted, and the whole concept was more of an experiment in self-hosting large AI models and building a functional autonomous system with them.
So far, I will not draw any conclusions about how the AI writes, other than that it seems pretty funny for the most part.
The articles it writes are often extremely offensive; however, it offends everyone equally.
I would love to hear your conclusions on this. Let me know what you think, any suggestions you have, or details you think should be included in this study/project in the comments below.