ChatGPT Is About to Change The World for the Worse
The hoopla around ChatGPT has obscured the fact that the AI invasion of the thought space is nothing new. It has been going on for decades…but that doesn’t make it any less terrifying. In fact, it makes it more terrifying.
ChatGPT is nothing more than the logical extension of the first “content spinners” that appeared on the internet in 2004.
The content spinners were crude programs that let the user modify an existing piece of writing by selectively changing individual words, resorting to a thesaurus to find synonyms for key words in the original sample. The programs quickly progressed to the point where they could alter the structures of the paragraphs to create very different-looking articles based on the same material.
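The substitution technique described above can be sketched in a few lines of Python. The thesaurus entries and function names here are my own illustration, not code from any actual spinner:

```python
import random

# A tiny hard-coded "thesaurus" -- real spinners of the era shipped with
# large synonym databases; these entries are purely illustrative.
THESAURUS = {
    "quick": ["fast", "rapid", "speedy"],
    "writer": ["author", "scribe"],
    "article": ["piece", "essay", "story"],
}

def spin(text, rng=random):
    """Replace each recognized word with a randomly chosen synonym."""
    out = []
    for word in text.split():
        key = word.strip(".,").lower()
        if key in THESAURUS:
            out.append(rng.choice(THESAURUS[key]))  # crude: drops punctuation/case
        else:
            out.append(word)
    return " ".join(out)
```

Real spinners added grammar-aware rewriting and sentence reordering on top of this, but word-level synonym swapping was the core trick.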
This is, of course, plagiarism if the original article belonged to someone else, and it has been going on forever. (There’s even a category of plagiarism now called “self-plagiarism” which consists of rewriting your own articles in order to resell them to different outlets.)
Long before the first personal computers appeared, writers were paraphrasing other writers and rewriting their lesser efforts into better ones. Shakespeare did it more than once. Dylan has been accused of doing it.
What ChatGPT adds to the mix is simply a look-up function, one that I believe uses an Elastic Stack implementation to replicate the functions performed by consumer-oriented search engines such as Google – Elastic Stack being the engine behind many consumer search tools – in order to collect data relevant to the terms of the original request.
In order to combat the proliferation of plagiarized articles in academic circles, and across the World Wide Web, plagiarism checkers started appearing before the turn of the century. These programs operate by scanning an article, parsing out key phrases, and then searching for iterations of the same sequences of words across the internet.
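The matching step these checkers perform is essentially phrase "shingling": break a document into overlapping word sequences and look for the same sequences elsewhere. Below is a minimal sketch under my own assumptions (five-word phrases, exact matching); real checkers add fuzzier matching and web-scale indexes:

```python
def shingles(text, n=5):
    """Break a document into its overlapping n-word phrases ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(doc_a, doc_b, n=5):
    """Fraction of doc_a's phrases that also appear verbatim in doc_b."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    return len(a & b) / len(a) if a else 0.0
```

A high overlap score between a submitted paper and an indexed source is what triggers a plagiarism flag; a word-swapping spinner defeats exact matching, which is why the two kinds of programs ended up in an arms race.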
Together, the content spinners and the plagiarism checkers were among the first instances of artificial intelligence actually impacting intellectual activities in a meaningful way.
And then things got crazy.
In 2005, some M.I.T. students created a program that could generate realistic, well-formatted, and completely bogus scientific papers on computer science topics, some of which were then submitted to and accepted by conferences and journals. The M.I.T. crew did this because they were concerned about the number of clearly unqualified papers appearing in conference proceedings and scholarly journals. (Go to https://pdos.csail.mit.edu/archive/scigen/ to see it.)
In retaliation, a French computer scientist wrote a program that can sniff out and identify bogus papers. (Well, he was actually attempting to address the same problem by giving the academic community a tool to identify bogus papers.)
At around the same time, a reporter for the Los Angeles Times created a program that could automatically generate predictive articles about upcoming earth tremors, which were automatically posted in the paper. (Don’t know if that’s still being used.)
In 2014, if I remember the date correctly, the single highest-rated “scholar” (in terms of the number of citations of his articles) on Google Scholar turned out to be a completely bogus, nonexistent person – complete with faked academic credentials from universities that did not actually exist, a long list of faked articles, and faked peer reviews from nonexistent reviewers. The whole persona was the work of a team of scientists who had become alarmed by the number of faked articles popping up in scientific journals.
This did not actually turn out very well, because an unknown number of academicians have since cited this fake scholar’s fake articles in their “legitimate” articles, raising questions about how widely the bogus information has spread from the corrupted original sources.
In the meantime, there are now more than 30,000 scientific journals out there, the vast majority of which are poorly managed “pay-to-play” outlets that charge desperate-to-get-published academicians exorbitant fees to post their articles on their “online-only” websites where the contributors also “peer-review” each other’s articles.
Even worse, many highly respected and genuinely peer-reviewed scientific journals – including The New England Journal of Medicine, The Lancet, Science, and Nature, among others – have had to retract numerous articles that turned out to be fraudulent in one way or another. This isn’t news; it has been happening for years. But the percentage of apparently bogus articles is increasing, according to testimony from the editors-in-chief of NEJM and The Lancet, both of whom averred that as many as fifty percent of the papers they’ve published were of dubious value.
If you think this is much ado about nothing, think again, because much of the misinformation that has proliferated throughout our political environment is based upon bogus articles published in these “pay-to-play” journals, giving ideologues the ability to cite seemingly reputable sources that appear to substantiate their claims.
Around six years ago, there was a spate of articles in the popular press, authored by real scientists, raising the issue that many journals reject papers that challenge or disprove previously published work. They noted that when articles based on bad science are finally retracted, the retractions often go largely unnoticed, which leaves the bogus data in circulation. The same articles questioned the qualifications of many of the reporters who write about scientific topics while having little expertise in the subjects they cover. Worse still, the editors who review those articles often have even weaker scientific backgrounds than the reporters who wrote them…but that doesn’t stop them from composing increasingly misleading headlines for reputable and less reputable articles alike.
At this stage of the developmental cycle, programs like ChatGPT require a human being to post the articles and comments generated by the AIs. This will result in an exponential increase in the amount of traffic proliferating through social media, the net effect of which will be to further reduce the amount of scrutiny any specific article or comment may receive.
When the AI programs eventually gain the ability to open their own social media accounts, the already overloaded information environment will be inundated with a massive influx of articles and comments posted by the AIs themselves, which will make it increasingly difficult for real human beings to find and connect with each other.
The exponential increase in data proliferation will make it impossible to “police” that data to ensure that the information being injected into our collective consciousness is, in fact, meaningful and valid, to the point where the only entities capable of analyzing that data will be the AIs themselves, which leaves us in the position of trusting the foxes to guard the hen house.
The next step will be AIs that actually read the stuff that other AIs create. Oh, wait. That’s already here. If you use Grammarly, be careful, because the Grammarly AI-bot has an agenda of its own: the perpetuation of the discredited Oxford Comma, which even Oxford University no longer supports.
AIs creating art, music, fiction, and poetry make me feel that I no longer have any function in the world as either an essayist or a poet. And since I am what I think, what will I be then?
If that doesn’t terrify you, AIs writing scientific papers and getting them published absolutely should.
What about you?
Suzanne
12/11/2022 @ 5:54 pm
Alan, don’t be scared 🙂
The only area I know about is what’s happening with AI and visual art. Artists who have grabbed it head on are doing some incredible things, images impossible to be made otherwise. Students are eagerly exploring also. I just watched a vid of James Gurney, the Dinotopia illustrator (an illustrator rock star) as he created an AI image that was a percentage of AI plus a percentage of him working traditionally–amazing. It seems necessary–for now anyway–to shift creative thinking from working solo, to working in collaboration with digital information, choosing and cherry-picking and assembling, a new way of creating a collage.
Remember how we felt about photoshop in the beginning? Painting was pronounced dead for the fifth or sixth time since camera invention was supposed to kill it. It’s a tool like anything else, at least for visual art. Maybe you and I won’t figure out how to use it, or even want to, but that doesn’t mean in the hands of other creatives, great new work won’t be made.
Sidebar: given that you’re a politico, google ‘AI Trump’. I think you’ll enjoy what gets returned 🙂
Alan Milner
12/13/2022 @ 4:12 pm
This demonstration (AI Trump) has been deliberately programmed to avoid any of Trump’s racist, anti-feminist, anti-gay and antisemitic comments. The real Donald Trump is much scarier than this one.
Ron Powell
12/12/2022 @ 2:53 am
hmmmm!
Suzanne
12/12/2022 @ 7:20 am
trying the attachment function out for Ron…..
Ron Powell
12/12/2022 @ 10:02 am
Suzanne,
If this is what you imagine a Trump internet troll looks like, you get no argument from me…
Suzanne
12/12/2022 @ 12:10 pm
Ron, a redditor gave one of the AI image generating programs the text prompt for Cthulhu and Trump with microphones….the troll reading works too!
JP Hart
12/15/2022 @ 4:59 am
‘The Brilliance and Weirdness of ChatGPT’
‘A new chatbot from OpenAI is inspiring awe, fear, stunts and attempts to circumvent its guardrails.’
NYT, 5 DEC 2022, by Kevin Roose, proffers additional perspective which includes game-changers such as ‘…might make Google obsolete…’; and: ‘The potential societal implications of ChatGPT are too big to fit into one column. Maybe this is, as some commenters have posited, the beginning of the end of all white-collar knowledge work, and a precursor to mass unemployment. Maybe it’s just a nifty tool that will be mostly used by students, Twitter jokesters and customer service departments until it’s usurped by something bigger and better.’ Apparently the dynamics are already ‘old hat’ in the Twittersphere. And Mr. Roose posits that ChatGPT plausibly will alter our ‘space/time’ as much as the iPhone has. Frankly, truth ought never be elusive. Right now I am garnering glints of futurist Alvin Toffler’s 1970 ‘Future Shock’ as well as Herbert Marcuse’s quite impactful 1964 ‘One-Dimensional Man’, wherein the prowess of subliminal simulacrum secretly machinates impulse and behavior, e.g. ‘false needs’. It might be interesting to assign ChatGPT: thoughts, please, as though you are an ennui-riddled catatonic … kindly avoid comma splices as well as any graphic detail of the abyss.
JP Hart
12/15/2022 @ 6:52 am
Fifty-four years ago Stanley Kubrick’s 2001: A Space Odyssey profiled the indelible HAL with its oft poignant chatter … the be-bop-ish bot hinted at omniscience as well as the other 3 omnis of God. (Everyone wants to know it all but nobody likes a know-it-all) so there you go! a pronounced ‘sleeves-up’ creativity of that epic sci-fi more than half a century ago nowadays has assumed the helm or at least has its titanium paws on the wheel. Fast forward with the parallel strides of nanotechnology, it’s just an awesome sauce that HAL did not self-replicate. No three hots and a cot required. Though after a pause we all may be walking antiques, potentially ChatGPT could brainstorm on preventive medicine, man’s inhumanity to man, famine, weather alteration and control as well as the calculus of homelessness.
And to plagiarize Paul Harvey, good day! And better tomorrows!
Alan Milner
12/15/2022 @ 5:47 pm
As someone who built AI exemplars in the mid-eighties, I can assert one fact. AIs cannot create new knowledge. All they can do is extract and organize already existing information, without the ability to evaluate the validity of the data.
JP Hart
12/15/2022 @ 8:31 pm
COOL! COOL! COOL!
salute your compassion
leadership and sanity
0 & creativity 0 &
selflessness as well
as your intrepid talent
<<>>
JP Hart
02/23/2023 @ 9:02 am
~naturally~
><HA77EY’S-CALM-ET><
Predicted next perihelion: July 28, 2061. Promethean?
… My teeth itch yet I need those National Park Quarters …
I may/may not be available. Boik@dawn:111LO;} still I wonder if Caisson's disease & caribou correlate conspire corruption? That sand flea not far from Wounded Knee. Shall be free. Tasking if it is CHATBOT or CHATBOOT?
First rule of ice fishing: keep your socks dry. MY-O-MY