They could VERY EASILY justify giving Trump's proposal to violently remove millions or execute The Purge the Hillary email treatment, but they don't. And won't.
Our Rube Goldberg system for electing a President isn't called the "Electoral College" in the Constitution or in federal statutes. Alexander Keyssar discusses the term's evolution in an appendix to his book, "Why Do We Still Have the Electoral College?"
In much of the US, this is the last week to register to vote in the November election. Don't miss your opportunity (and if you're already registered, check your registration to make sure it's still valid). Get registered and encourage everyone you know to register. Here's where to start: Vote.gov
Find the information you need to make registration and voting easy. Official voter registration website of the United States government.
I am wide-eyed at this language. While the court resisted outright acknowledging forced labor as involuntary servitude, here it is walking riiigghht up to the line of making a 13th Amendment argument for abortion rights, which I would be absolutely here for!!
SCOTUS claiming the power to legalize bribery is really underrated as an all-time lawless "ruling"
Jesus Christ, I just want morning coffee not a voyage to a mystic latte brotherhood on a distant mountaintop
Can we please stop calling everything in the world a "journey" now? I just got a new milk frother for coffee and the instruction book cover says in huge letters: Start Your Barista Creator Journey Here
Look, I know I'm not a climate scientist, but this seems bad.
you can't "shift" LLMs "towards reliability." that's like trying to "shift" a fisher price pull-along toy into a supercar. they have superficial similarities but aren't even in the same family of things. LLMs cannot be made to return "facts," only things that sound similar to things they've ingested.
More simply put: the larger and more trainable the language model, the more bullshit it produces. Because, and I guess I'll keep saying it until it sinks in, making up statistically plausible bullshit is literally what LLM/GPT systems *inherently* do. Nice to have another paper to cite for it, tho.