Nature Portfolio
@natureportfolio.bsky.social
Nature Portfolio’s high-quality products and services across the life, physical, chemical and applied sciences are dedicated to serving the scientific community.
2.4k followers · 83 following · 409 posts

Scaling up and shaping up large language models increased their tendency to provide sensible yet incorrect answers at difficulty levels humans cannot supervise, highlighting the need for a shift in AI design towards reliability, according to a paper in Nature. go.nature.com/4eCAnis 🧪

This is figure 1, which shows key indicators for several models in GPT (OpenAI), LLaMA (Meta) and BLOOM (BigScience) families.

@alishy.bsky.social

The question is, are they actually going to make any efforts toward reliability of output if humans can’t fact-check it anyway?

@amyhoy.bsky.social

you can’t “shift” LLMs “towards reliability.” that’s like trying to “shift” a fisher price pull-along toy to a super car. they have superficial similarities but aren’t even in the same family of things. LLMs cannot be made to return “facts” only things that sound similar to things they’ve ingested

@whowhatnow56.bsky.social

Or, and hear me out here: there’s no need for this kind of AI, and we shouldn’t keep going down this path that’s boiling our oceans just so a couple of companies can keep claiming stock growth

@mrchompchomp.bsky.social

Reliability. Imagine that.


"a shift towards reliability" how tf am i the one living below the poverty line when an entire industry can peddle billion dollar bullshit where the fact it doesnt work is a feature not a bug

@isobelhk.bsky.social

📌
