Jeff Kessler
@jckessler.bsky.social
Senior Lecturer at UIC. UICUF member, parent, Victorianist, grammarian, & writer. PhD, Indiana Univ. Co-Editor/Author of tinyurl.com/55wxre5x
68 followers · 100 following · 74 posts
jckessler.bsky.social

Further reminder that John Roberts is by far the most damaging figure in American politics.

Reposted by Jeff Kessler
aktange.bsky.social

Continuing to try to spread this information. "To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select 'Data privacy,' and turn off the option under 'Data for generative AI improvement'."

LinkedIn is training AI on you – unless you opt out with this setting

The professional network now by default grants itself permission to use anything you post to train its artificial intelligence

Reposted by Jeff Kessler
cameronwilson.bsky.social

As part of a test run for Australia's corporate regulator, AI was used to summarise submissions made by the public. The trial found that AI performed worse in every metric compared with humans. Assessors suggested AI would make more work for people, not less. www.crikey.com.au/20...

AI worse than humans in every way at summarising information, government trial finds

A test of AI for Australia's corporate regulator found that the technology might actually make more work for people, not less.

Reposted by Jeff Kessler
abeba.bsky.social

"Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didnā€™t have access to ChatGPT." hechingerreport.org/kids-chatgpt...

Kids who use ChatGPT as a study assistant do worse on tests

Researchers compare math progress of almost 1,000 high school students

Reposted by Jeff Kessler
ecourtem.bsky.social

Never forget this banger post

Reposted by Jeff Kessler
andrew.heiss.phd

Finally created an official policy for AI/LLMs in class (compaf24.classes.andrewheiss.com/syllabus.htm... doi.org/10.1007/s106...)

(See link for full text)

AI, large language models, and bullshit

I highly recommend not using ChatGPT or similar large language models (LLMs) in this class.

I am not opposed to LLMs in many situations. I use GitHub Copilot for computer programming-related tasks all the time, and I have ongoing research where we're experimenting with using Meta's Ollama to try automatically categorizing thousands of nonprofit mission statements. Using LLMs requires careful skill, attention, and practice, and they tend to be useful only in specific limited cases.

I am deeply opposed to LLMs for writing.

Google Docs and Microsoft Word now have built-in text-generation tools where you can start writing a sentence and let the machine take over the rest. ChatGPT and other services let you generate multi-paragraph essays with plausible-looking text. Please do not use these.
(See link for full text)

Using LLMs and AI to generate a reflection on the week's readings or to generate a policy brief will not help you think through the materials. You can create text and meet the suggested word count and finish the assignment, but the text will be meaningless. There's an official philosophical term for this kind of writing: bullshit (Hicks, Humphries, and Slater 2024; Frankfurt 2005).1

1 I'm a super straight-laced Mormon and, like, never ever swear or curse, but in this case, the word has a formal philosophical meaning (Frankfurt 2005), so it doesn't count :)

Philosophical bullshit is "speech or text produced without concern for its truth" (Hicks, Humphries, and Slater 2024, 2). Bullshit isn't truth, but it's also not lies (i.e. the opposite of truth). It's text that exists to make the author sound like they know what they're talking about. A bullshitter doesn't care if the text is true or not – truth isn't even part of the equation:
(See link for full text)

Do not replace the important work of writing with AI bullshit slop. Remember that the point of writing is to help crystallize your thinking. Chugging out words that make it look like you read and understood the articles will not help you learn.

In your weekly reading reports, I want to see good engagement with the readings. I want to see your thinking process. I want to see you make connections between the readings and between real-world events. I don't want to see a bunch of words that look like a human wrote them. That's not useful for future-you. That's not useful for me. That's a waste of time.

I will not spend time trying to guess if your assignments are AI-generated.2 If you do turn in AI-produced content, I won't automatically give you a zero. I'll grade your work based on its own merits. I've found that AI-produced content will typically earn a ✓− (50%) or lower on my check-based grading system without me even needing to look for clues.
ChatGPT is bullshit

Michael Townsen Hicks, James Humphries & Joe Slater 

Abstract
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Reposted by Jeff Kessler
quatoria.bsky.social

haha oh god AI is going to do so much damage to our schools and our students

Reposted by Jeff Kessler
resnikoff.bsky.social

Public sector consulting firm with a slide deck / career civil servant with a spreadsheet

Reposted by Jeff Kessler
abeba.bsky.social

"People generally substantially overestimate what the technology is capable of today. In our experience, even basic summarization tasks often yield illegible and nonsensical results." A must-read: www.404media.co/goldman-sach...

Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable

One of the world's largest investment banks wonders if generative AI will be worth the huge investment and hype: "will this large spend ever pay off?"
