Chris
@chrissmallz.com
Startup, Software & Skincare guy 🏀🏎️🇨🇦 facultyofskin.com
28 followers · 192 following · 72 posts
chrissmallz.com

🍂

chrissmallz.com

I mostly agree, but if leading a team to a championship is a credential for retiring one’s jersey then we might as well retire Kawhi’s too. I see your point tho

chrissmallz.com

I have used Google Cloud for most of my professional career (just a few years into it). I think it's a great and simple cloud service. Now I'm working with Azure and it's the most confusing service I think I've ever worked with. Does anyone else agree or is it just me?

chrissmallz.com

Definitely read this thinking of a computer server; very confusing from that perspective

Reposted by Chris
bmann.ca

The Personal Data Server (PDS) in #ATProtocol stores structured data on behalf of users. Users delegate to a PDS, and can change it later. Kind of like choosing GitHub for git hosting, running it yourself, etc. The user approves apps to write to the PDS.
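The delegation model described above can be sketched in code: an app writes a record to whichever PDS the user has chosen, using ATProto's `com.atproto.repo.createRecord` XRPC endpoint. The endpoint name and `app.bsky.feed.post` collection are real ATProto identifiers; the PDS hostname and DID below are purely illustrative placeholders.

```python
# Sketch: building a createRecord call against a user's chosen PDS.
# Because the user delegates storage, only the host changes if they
# move to a different PDS -- the call shape stays the same.
import json

def build_create_record_request(pds_host, repo_did, collection, record):
    """Return the URL and JSON body for a com.atproto.repo.createRecord call."""
    url = f"https://{pds_host}/xrpc/com.atproto.repo.createRecord"
    body = {
        "repo": repo_did,          # the user's DID; identity is portable
        "collection": collection,  # record type, e.g. app.bsky.feed.post
        "record": record,          # the structured data being stored
    }
    return url, json.dumps(body)

# Hypothetical host and DID, shown only to illustrate the shape of the call.
url, body = build_create_record_request(
    "pds.example.com",
    "did:plc:abc123",
    "app.bsky.feed.post",
    {"text": "hello", "createdAt": "2024-01-01T00:00:00Z"},
)
```

Swapping `pds.example.com` for a self-hosted domain is the "choose your own git hosting" move from the post: the app keeps making the same call, just to a different server.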

chrissmallz.com

Yea, it seems like breaks are a common theme for preventing burnout, seems kinda obvious now that I’m thinking about it lol

chrissmallz.com

Wow, that’s some valuable insight! I have been trying to figure out what the culture at my startup should be once it matures a little. This helps a ton and aligns with my own values. Have you never experienced any burnout in your career then? Seems too good to be true haha

Reposted by Chris
emilynordmann.bsky.social

Finally read the full paper and I'm on board. Calling AI errors hallucinations suggests they have perception and that something goes wrong when in fact, the process is the same whether the output is meaningful or not. Better to refer to it as bullshit - no regard for the truth. #AcademicSky

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
chrissmallz.com

I’m just starting out in my career, I’ve co-founded a startup and am working in a rehabilitation laboratory at a university. So far I’m loving what I do and don’t have a hint of burnout. Do most people just hit it like a brick wall? Seems terrifying tbh
