Does encoding the present compete with predicting the future? Across 3 studies, we find that encoding and prediction are coupled, not competitive! Proud of co-first authors Craig Poskanzer & Hannah Tarder-Stoll, along w/ @raheemajavid.bsky.social osf.io/preprints/ps...
Would you be able to add me to the list?
This is worth looking into - my Denon amplifier has a severe delay when using a bluetooth connection. It makes editing audio quite miserable.
In this post, I propose a simple description of when pictures appear distorted, and how multiperspective photography creates distortion-free wide-angle pictures. This is part of my new perspective perception theory for art and photography. #visionscience aaronhertzmann.com/2024/09/09/d...
A theory describing when pictures look distorted.
I casually started watching Lecture 1, and before I knew it I was seven lectures in. Highly recommend.
For the new science refugees here, I invite you to slip into the warm embrace of 20+ hours of free, soothing lectures on scientific methods: a stats course for people who hate statistics and would rather be doing research or planning revolutions. Try the 1st vid - it's not what you expect. #stats 🧪
TechnoSphere--an online digital ecosystem simulation--was launched OTD in 1995. Users could create their own creatures and watch them interact. Although the character parameters were simple, spontaneous group behaviors such as herding emerged, along with follow-on changes in predator strategies. 🌱🐋 🦋🦫🧪🌎
What an awesome resource!
We're thrilled to announce the release of the Visual Experience Dataset — over 200 hours of synchronized eye movement, odometry, and egocentric video data. This dataset will be a game changer for perception and cognition research. arxiv.org/abs/2404.18934 1/
We introduce the Visual Experience Dataset (VEDB), a compilation of over 240 hours of egocentric video combined with gaze- and head-tracking data that offers an unprecedented view of the visual...