ChatGPT Is About to Change the World for the Worse

The hoopla around ChatGPT has obscured the fact that the AI invasion of the thought space is nothing new. It has been going on for decades…but that doesn’t make it any less terrifying. In fact, it makes it more terrifying.

ChatGPT is nothing more than the logical extension of the first “content spinners” that appeared on the internet in 2004.

The content spinners were crude programs that allowed the user to modify an existing piece of writing by selectively changing individual words, resorting to a thesaurus to find synonyms for key words used in the original sample. The programs quickly progressed to the point where they could alter the structure of entire paragraphs, producing very different-looking articles built from the same material.
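To see how little machinery that took, here is a minimal sketch of the synonym-swap trick in Python – the three-entry thesaurus is an invented stand-in for the full ones the real spinners shipped with:

```python
import random

# A tiny stand-in thesaurus; real spinners used full ones.
SYNONYMS = {
    "quick": ["fast", "rapid", "speedy"],
    "article": ["piece", "write-up", "story"],
    "important": ["crucial", "significant", "vital"],
}

def spin(text: str) -> str:
    spun = []
    for word in text.split():
        core = word.rstrip(".,;:!?")   # keep trailing punctuation intact
        tail = word[len(core):]
        # Swap roughly half the recognized words for a random synonym.
        if core.lower() in SYNONYMS and random.random() < 0.5:
            core = random.choice(SYNONYMS[core.lower()])
        spun.append(core + tail)
    return " ".join(spun)

print(spin("This quick article makes an important point."))
```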

This is, of course, plagiarism if the original article belonged to someone else, and it has been going on forever. (There’s even a category of plagiarism now called “self-plagiarism” which consists of rewriting your own articles in order to resell them to different outlets.)

Long before the first personal computers appeared, writers were paraphrasing other writers and rewriting their lesser efforts into better ones. Shakespeare did it more than once. Dylan has been accused of doing it.

What ChatGPT adds to the mix is scale. It is not a look-up function riding on top of a search engine like Google; it is a large language model that has ingested enormous quantities of text from across the internet and generates its responses one word at a time, predicting what should come next based on the patterns in everything it has read. The practical effect is a content spinner that no longer needs a source article – it remixes everything at once.
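To make that concrete, here is a toy next-word generator – a bigram model trained on one invented sentence. It is a drastic simplification of what ChatGPT does with a neural network and billions of parameters, but the basic move is the same: pick each next word based on what tended to follow the previous one:

```python
import random
from collections import defaultdict

# A tiny, invented "training corpus" standing in for the internet.
corpus = ("the model reads the text and the model predicts "
          "the next word and the next word follows the text").split()

# Record which words followed which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])  # sample the next word
        output.append(word)
    return " ".join(output)

print(generate("the"))
```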

In order to combat the proliferation of plagiarized articles in academic circles and across the World Wide Web, plagiarism checkers started appearing before the turn of the century. These programs operate by scanning an article, parsing out key phrases, and then searching for the same sequences of words across the internet.
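A bare-bones sketch of that matching step might look like the following – real checkers compare the suspect text against a web-scale index rather than a single source document, but the phrase-matching idea is the same:

```python
def shingles(text: str, n: int = 4) -> set:
    """Break text into overlapping n-word phrases ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_rate(suspect: str, source: str, n: int = 4) -> float:
    """Fraction of the suspect's phrases that also appear in the source."""
    s, src = shingles(suspect, n), shingles(source, n)
    return len(s & src) / max(len(s), 1)

source  = "the quick brown fox jumps over the lazy dog near the river bank"
suspect = "the quick brown fox jumps over a sleeping dog near the river bank"
print(f"{match_rate(suspect, source):.0%} of the suspect's phrases appear in the source")
```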

Together, the content spinners and the plagiarism checkers were among the first instances of artificial intelligence actually impacting intellectual activities in a meaningful way.

And then things got crazy.

In 2005, some M.I.T. students created SCIgen, a program that generates realistic, well-formatted, and completely bogus scientific papers on computer science topics – papers that were then submitted to, and accepted by, actual conferences. The M.I.T. crew did this because they were concerned about the number of clearly unqualified papers appearing in conference proceedings and scholarly journals. (Go to https://pdos.csail.mit.edu/archive/scigen/ to see it.)
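SCIgen works by recursively expanding a hand-written context-free grammar, and the core of that idea fits in a dozen lines. The toy grammar below is my own invention, not SCIgen’s actual rule set:

```python
import random

# A toy context-free grammar for paper titles. Uppercase tokens are
# nonterminals that get expanded; everything else is emitted as-is.
GRAMMAR = {
    "TITLE":   [["METHOD", "for", "PROBLEM"]],
    "METHOD":  [["A", "Scalable", "Framework"],
                ["Decoupled", "ADJ", "Algorithms"]],
    "PROBLEM": [["the", "Analysis", "of", "ADJ", "NOUN"],
                ["Synthesizing", "ADJ", "NOUN"]],
    "ADJ":     [["Probabilistic"], ["Encrypted"], ["Event-Driven"]],
    "NOUN":    [["Hash", "Tables"], ["Neural", "Networks"], ["Web", "Browsers"]],
}

def expand(symbol: str) -> str:
    if symbol not in GRAMMAR:
        return symbol  # terminal word: emit literally
    production = random.choice(GRAMMAR[symbol])
    return " ".join(expand(token) for token in production)

print(expand("TITLE"))
```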

In retaliation, a French computer scientist, Cyril Labbé, wrote a program that can sniff out and identify machine-generated papers. (Well, it wasn’t retaliation exactly – he was attacking the same problem from the other side, by giving the academic community a tool for spotting the fakes.)
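I don’t have his code, but one plausible approach in the same spirit is to compare a suspect paper’s word-frequency profile against known machine-generated text, since generated papers tend to reuse their grammar’s fixed vocabulary. Everything below, including the sample texts, is invented for illustration:

```python
import math
from collections import Counter

def profile(text: str) -> Counter:
    """Word-frequency profile of a text."""
    return Counter(text.lower().split())

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# Invented sample texts: one "known generated" reference, one suspect.
known_generated = "we prove that the methodology is optimal and the methodology converges"
suspect_paper   = "we prove that the framework is optimal and the framework converges"

score = cosine(profile(known_generated), profile(suspect_paper))
print(f"similarity to known generated text: {score:.2f}")
```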

Several years later, in 2014, a reporter for the Los Angeles Times unveiled a program that automatically generates short news articles about earthquakes the moment the U.S. Geological Survey reports them, ready for an editor to approve and post. (I don’t know if that’s still being used.)
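Automated journalism of that sort is mostly templating: structured data from a feed gets poured into boilerplate prose. A sketch, with a made-up quake record standing in for a real USGS alert:

```python
# A made-up quake record standing in for a structured USGS alert.
quake = {
    "magnitude": 4.2,
    "place": "6 miles from Westwood, California",
    "depth_km": 9.8,
    "time": "7:42 a.m. Monday",
}

# Boilerplate prose with slots for the structured data.
TEMPLATE = ("A magnitude {magnitude} earthquake struck {place} "
            "at {time}, at a depth of {depth_km} kilometers, "
            "according to the U.S. Geological Survey.")

print(TEMPLATE.format(**quake))
```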

In 2010, one of the most highly cited “scholars” (in terms of the number of citations of his articles) in Google Scholar’s rankings turned out to be a completely bogus, nonexistent person – “Ike Antkare,” a fabricated researcher with a long list of machine-generated articles, every one of which cited all the others. The hoax was engineered by the same French computer scientist, to demonstrate just how easily citation metrics can be gamed by this kind of fakery.
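The arithmetic of the stunt is worth spelling out: if a ring of generated papers all cite one another, the citation counts manufacture themselves. A sketch, assuming a ring of one hundred papers, roughly the scale of that experiment:

```python
def ring_citations(num_papers: int) -> dict:
    """Citation counts when every paper in a ring cites all the others."""
    papers = [f"fake-paper-{i}" for i in range(num_papers)]
    # Each paper is cited once by each of the other papers in the ring.
    return {p: num_papers - 1 for p in papers}

counts = ring_citations(100)
print("citations per fake paper:", counts["fake-paper-0"])   # 99
print("total citations manufactured:", sum(counts.values())) # 9900
```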

The stunt had an unsettling coda: an unknown number of academicians have cited the fake scholar’s fake articles in their “legitimate” articles, raising questions about how widely the bogus information has spread from the corrupted original sources.

In the meantime, there are now more than 30,000 scientific journals out there, the vast majority of which are poorly managed “pay-to-play” outlets that charge desperate-to-get-published academicians exorbitant fees to post their articles on their “online-only” websites where the contributors also “peer-review” each other’s articles.

Even worse, many highly respected, genuinely peer-reviewed scientific journals – including The New England Journal of Medicine, The Lancet, Science, and Nature, among others – have had to retract numerous articles that turned out to be fraudulent in one way or another. This isn’t news; it has been happening for years. But the proportion of apparently bogus articles is increasing, and editors-in-chief of both NEJM and The Lancet have publicly suggested that perhaps half of the published literature may be untrue or of dubious value.

If you think this is much ado about nothing, think again. Much of the misinformation that has proliferated throughout our political environment is based upon bogus articles published in these “pay-to-play” journals, giving ideologues the ability to cite seemingly reputable sources that appear to substantiate their claims.

Around six years ago, there was a spate of articles in the popular press, written by actual scientists, raising the issue that many journals reject papers that challenge or disprove previously published work. When articles based on bad science are finally retracted, the retractions often go largely unnoticed, which leaves the bogus data in circulation. The same articles questioned the qualifications of many of the reporters who write about scientific topics while having little expertise in the subjects they cover. Worse, the editors reviewing those stories often have even weaker scientific backgrounds than the reporters who wrote them…but that doesn’t stop them from composing increasingly misleading headlines for the reputable articles as well as the less reputable ones.

At this stage of the development cycle, programs like ChatGPT still require a human being to post the articles and comments they generate. Even so, the result will be an exponential increase in the amount of traffic proliferating through social media, the net effect of which will be to further reduce the amount of scrutiny any specific article or comment receives.

When the AI programs eventually gain the ability to open their own social media accounts, the already overloaded information environment will be inundated with a massive influx of articles and comments posted by the AIs themselves, which will make it increasingly difficult for real human beings to find and connect with each other.

The exponential increase in data proliferation will make it impossible to “police” that data to ensure that the information being injected into our collective consciousness is, in fact, meaningful and valid. Eventually, the only entities capable of analyzing that much data will be the AIs themselves – which leaves us trusting the foxes to guard the henhouse.

The next step will be AIs that actually read the stuff that other AIs create. Oh, wait – that’s already here. If you use Grammarly, be careful, because the Grammarly AI-bot has an agenda of its own: the perpetuation of the discredited Oxford comma, which even Oxford University’s own style guide has backed away from.

AIs creating art, music, fiction, and poetry make me feel I no longer have any function in the world as either an essayist or a poet. And since I am what I think, what will I be then?

If that doesn’t terrify you, AIs writing scientific papers – and getting them published – absolutely should.

What about you?
