Instead, I suspect that we will become able to spot the "flavor" of certain LLMs' writing, the way we can often spot a visual gen now. "Oh look, this story has extra fingers."
I generally find that LLM "creative" writing sounds a lot like a high school sophomore who has just discovered an online thesaurus.
I’ve heard a lot of people say genAI has weird word choice, in the "that is a very large vocabulary" kind of way. I worry about how many voracious-reader kids are going to get their essays flagged as AI.
Teachers have been doing this for decades ("this reads like an encyclopedia entry"/"Wikipedia article"/"AI"). Some day, they might get good enough, and I could tell them how to improve that process, but like Fermat, I'm not going to, because I'm running out of characters, and also don't want to.
They have a lot of tells, for sure. Minimal dialogue, and uninteresting even when it's there — always the most obvious thing to say and way to say it.
I can already spot GPT text, tbh. Not definitively, but you can kind of smell it.
As AI works are not copyrightable, I can't wait for the lawsuits when Disney uses it on their next movie and fanboys "steal" it.
I feel sorry for the poor people (and I'm sure there will be some) whose style triggers false positives for whatever heuristics people start using. I've already seen multiple posts about that happening with vocabulary in schoolwork.
I mean, I can think of various tells for AI output in student-to-faculty email, personal statements for applications and in assignments…
Concur. When I got blocked on a letter, an intro, or a conclusion, it helped if someone wrote what they thought I meant based on what I had written. They were usually wrong, but nearby, and that was enough for me to write what I really meant. It would not surprise me if AI ended up in that role.