Andrew Heiss 🍂🎃
@andrew.heiss.phd
Assistant professor at Georgia State University, formerly at BYU. 6 kids. Study NGOs, human rights, #PublicPolicy, #Nonprofits, #Dataviz, #CausalInference. #rstats forever. #LDSforHarris andrewheiss.com
3.3k followers · 2k following · 1.9k posts
@andrew.heiss.phd

Finally created an official policy for AI/LLMs in class (compaf24.classes.andrewheiss.com/syllabus.htm...) doi.org/10.1007/s106...

(See link for full text)

AI, large language models, and bullshit

I highly recommend not using ChatGPT or similar large language models (LLMs) in this class.

I am not opposed to LLMs in many situations. I use GitHub Copilot for programming-related tasks all the time, and I have ongoing research where we're experimenting with using Ollama and Meta's Llama models to automatically categorize thousands of nonprofit mission statements. Using LLMs well requires skill, attention, and practice, and they tend to be useful only in specific, limited cases.
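For anyone curious what that kind of workflow looks like, here is a minimal sketch in Python, assuming the `ollama` client library and a locally pulled Llama model; the model name, category list, and prompt are placeholders for illustration, not the actual research pipeline:

```python
# Minimal sketch: zero-shot categorization of nonprofit mission statements
# with a locally run Llama model via the ollama Python client.
# Assumes the ollama server is running and a model has been pulled,
# e.g. `ollama pull llama3`.
import ollama

# Hypothetical category list, for illustration only
CATEGORIES = ["arts", "education", "environment", "health", "human services"]

def categorize(mission: str) -> str:
    """Ask the model to pick exactly one category for a mission statement."""
    prompt = (
        "Classify this nonprofit mission statement into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n"
        f"Mission statement: {mission}\n"
        "Respond with the category name only."
    )
    response = ollama.chat(
        model="llama3",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"].strip().lower()

# Example: classify a single statement
print(categorize("We provide free after-school tutoring to middle schoolers."))
```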

I am deeply opposed to LLMs for writing.

Google Docs and Microsoft Word now have built-in text-generation tools where you can start writing a sentence and let the machine take over the rest. ChatGPT and other services let you generate multi-paragraph essays with plausible-looking text. Please do not use these.
(See link for full text)

Using LLMs and AI to generate a reflection on the week's readings or to generate a policy brief will not help you think through the materials. You can create text and meet the suggested word count and finish the assignment, but the text will be meaningless. There's an official philosophical term for this kind of writing: bullshit (Hicks, Humphries, and Slater 2024; Frankfurt 2005).1

1 I'm a super straight-laced Mormon and, like, never ever swear or curse, but in this case, the word has a formal philosophical meaning (Frankfurt 2005), so it doesn't count :)

Philosophical bullshit is "speech or text produced without concern for its truth" (Hicks, Humphries, and Slater 2024, 2). Bullshit isn't truth, but it's also not lies (i.e., the opposite of truth). It's text that exists to make the author sound like they know what they're talking about. A bullshitter doesn't care if the text is true or not; truth isn't even part of the equation:
(See link for full text)

Do not replace the important work of writing with AI bullshit slop. Remember that the point of writing is to help crystallize your thinking. Churning out words that make it look like you read and understood the articles will not help you learn.

In your weekly reading reports, I want to see good engagement with the readings. I want to see your thinking process. I want to see you make connections among the readings and between the readings and real-world events. I don't want to see a bunch of words that merely look like a human wrote them. That's not useful for future-you. That's not useful for me. That's a waste of time.

I will not spend time trying to guess if your assignments are AI-generated.2 If you do turn in AI-produced content, I won't automatically give you a zero. I'll grade your work based on its own merits. I've found that AI-produced content will typically earn a ✓− (50%) or lower on my check-based grading system without me even needing to look for clues.
ChatGPT is bullshit

Michael Townsen Hicks, James Humphries & Joe Slater 

Abstract
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

@climatemichael.bsky.social

Greatly appreciate your sharing this!

@rifka.bsky.social

Dear god FINALLY an honest AI policy

@anyabernstein.bsky.social

Beautiful. Hope you don't mind if I riff off this to replace my overly milquetoast current version. And thanks for the article link!

@katekilla.bsky.social

This is SO SO good. Thank you for sharing it.

@clasticdetritus.bsky.social

this is great, thanks for sharing


This is the way!!

@wrenispinkle.bsky.social

I would be so happy to see a professor give this kind of genuine, solid advice at the start of any course

@weedenkim.bsky.social

📌

@stewartcoles.bsky.social

📌

@shannoninsea.bsky.social

📌
