A.I.’s Reality Distortion

"The real problem of humanity is the following: we have Paleolithic emotions; medieval institutions; and god-like technology." – E.O. Wilson

The Luddites were the original Cassandras of cyberaugury (prophetic warning of our digital futures), warning us that technology in the wrong hands could replace us, or even make our lives worse in the long run, all while maintaining a veneer of making our lives easier, better, or more efficient. In reality, all technology in the hands of corporations does is bait and switch: exploiting you while keeping you hooked to ever more degraded and increasingly useless platforms. Today, with A.I., we face a new type of exploitation, one that can run circles around us and enclose more and more of what we use to verify reality. What do you do when prices on store shelves change a few hundred times a day? How do you find verified sources when A.I.-generated books, bots, and politically motivated editing are constantly rewriting the past? Or when they're molding the future at such a rapid rate of change that we can't even keep up with what's happening before we're trapped in its grip like some human game of Mouse Trap?

We aren't going to keep ahead of the curve. But we can adapt and help lift each other out of the corporate mind f*ck we're all trapped in. That's where we're at now: keeping each other from drowning and sharing tips on how to swim. So, below I'll share some links where you can learn more about these things. But I also want to talk about a new A.I. ploy that poses as useful but is honestly one of the scariest things I've seen to date when it comes to reality distortion.

So what is it? Google NotebookLM. It portrays itself as a shareable research notebook: you drop sources into it and make notes. You can use A.I. to ask questions of the sources and it will pull the relevant information, in theory. Because are you really going to sit there and fact-check everything after you've already gotten your supposed answer? Are you even going to read all those sources you dumped into it? Hell no! That would negate the time A.I. supposedly saved you, right? At this point you've already lost. You're relying on a piece of software to tell you what's true. Not only that, but you can't really verify the sources you put into it in the first place. Why? Because everyone else is using A.I. to write their articles, and on it goes… a rabbit hole of ever worse, unverified, unchecked, useless sources spewing out unverified 'information'.

So, what happens if we dump legitimate sources into it? I’m glad you asked! I did this for you and created a new project where I dropped the following sources:

  • ‘A Dialogue with Arne Naess on Social Ecology and Deep Ecology (1988-1997)’ by John P. Clark
  • ‘Social Ecology and Communalism’ by Murray Bookchin

And then, instead of fact-checking the A.I., which I wasn't too thrilled to do (told you it works on everyone), I saw something that worried me even further: a 'Generate' button. This button completes the reality distortion by erasing your ability to tell fact from fiction, reality from unreality. It does this by scooping up those sources you fed it and generating a two-person podcast on the topic that sounds legit. I've embedded the podcast I generated below for you to listen to. Decide for yourselves if A.I. is a threat. And if you find yourself as worried as I am? Go check out those links below to learn more.

Learn More:
