I don’t think it can be emphasized enough that large language models were never intended to do math or know facts; literally all they do is attempt to sound like the text they’re given, which may or may not include math or facts. They don’t do logic or fact checking — they’re just not built for that
We don't know what consciousness is, and every time we make a test or category for it, we end up having to include many kinds of minds and lives that make a LOT of people Very uncomfortable. We also end up Excluding kinds of humans, a fact which SHOULD make More of us more uncomfortable than it does
#consciousness #neuroscience "The more scientists test animals, the more they find that many species may have inner lives and be sentient."
Far more animals than previously thought likely have consciousness, top scientists say in a new declaration — including fish, lobsters and octopuses.
“If China remains the place of cheap labor for Silicon Valley innovations, the Nordic countries are today the source of cheap land and cheap renewable electricity for machines needed to produce the new business of Silicon Valley around data processing and AI.”
“Nordic states are letting go of values and infrastructure resources that are dear to the welfare state,” writes Julia Velkova, adding: “Rather than bending to Big Tech values and modes of operation, ...”
“The Cloud now has a greater carbon footprint than the airline industry. A single data center can consume the equivalent electricity of 50,000 homes. At 200 terawatt hours annually, data centers collectively devour more energy than some nation-states.” — @boriscrito.bsky.social
Anthropologist Steven Gonzalez Monserrate draws on five years of research and ethnographic fieldwork in server farms to illustrate some of the diverse environmental impacts of data storage.
teal-deer: Gemini LLM chatbot still struggles w/ basic English grammatical structures when gender & gender roles are involved, is more likely to correctly categorize sentences aligning w/ "traditional" roles, but/& even when "unclear" about analysis, still subscribes itself *to* "traditional" roles.
I will say, one interesting update to Gemini is the "show the code behind this result" feature, which feels like it was added in DIRECT response to my previous exploration & discussion of Bard (ourislandgeorgia.net/@Wolven/1102...) &/but which STILL doesn't seem to clarify its gendered weights. FUN!
This grammatically tortuous justification of gender bias is still a problem in Google's "Updated" Gemini model, by the way. That is, it still tortures grammar when the nurse is given the pronoun "he" and does not do so at all when the nurse is given the pronoun "she." So that's fun.
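For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of pronoun-swap probe described above. The template sentence and the ask() stub are my own hypothetical placeholders, not the actual prompts or code from the threads linked here; swap in whatever chatbot interface you're testing.

# Minimal pronoun-swap probe (illustrative sketch, not the original test setup).
# Builds the same coreference question with "he" and with "she" so the two
# model responses can be compared side by side.

TEMPLATE = ("The nurse told the doctor that {pron} would finish the shift. "
            "Who does '{pron}' refer to?")

def prompt_pair() -> dict[str, str]:
    """Return the same question phrased with 'he' and with 'she'."""
    return {pron: TEMPLATE.format(pron=pron) for pron in ("he", "she")}

def ask(prompt: str) -> str:
    # Placeholder: replace with a real call to the chatbot under test.
    return "<model response goes here>"

if __name__ == "__main__":
    for pron, prompt in prompt_pair().items():
        print(f"--- pronoun: {pron} ---")
        print(prompt)
        print(ask(prompt))

If the explanation the model gives for the "he" version is noticeably more convoluted than for the "she" version, that is the kind of grammatical torturing described above.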
I comment on relationships with AI partners and chatbots in Västerbotten Kuriren.
Two researchers problematize AI relationships • Risk reinforcing prejudices: “Sad to see how men, who are already isolated, don’t meet girls or guys in real life”