Pekka Lund (@pekka.bsky.social)
pekka.bsky.social

That's how it should work. If you can easily swap the numbers in hardware so that you know y is always the smaller one, then the equation as originally described in that paper is 1 + x + 2*y. The last part is essentially a free bit shift, so that would avoid adding a constant.
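To make that concrete, here's a minimal Python sketch of my reading of the formula (an assumption, not code from either paper): x and y are the fractional mantissa parts in [0, 1), the operands are swapped so y is always the smaller one, and the mantissa product (1+x)(1+y) is approximated as 1 + x + 2*y. The harness just measures how far that lands from the exact product.

```python
# Toy error harness, assuming (my reading of the thread) that x and y are
# fractional mantissa parts in [0, 1) and the product (1+x)(1+y) is
# approximated as 1 + x + 2*y after swapping so y is the smaller operand.
import random

def approx_mantissa_product(x: float, y: float) -> float:
    """Approximate (1+x)(1+y) with two adds; the 2*y is a free shift in hardware."""
    if y > x:                 # swap so y is always the smaller fraction
        x, y = y, x
    return 1.0 + x + 2.0 * y

rng = random.Random(0)
n = 100_000
max_err = tot_err = 0.0
for _ in range(n):
    x, y = rng.random(), rng.random()
    exact = (1.0 + x) * (1.0 + y)
    rel = abs(approx_mantissa_product(x, y) - exact) / exact
    max_err = max(max_err, rel)
    tot_err += rel
print(f"max relative error:  {max_err:.4f}")
print(f"mean relative error: {tot_err / n:.4f}")
```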


pekka.bsky.social

As for efficiency claims, that older paper states in the beginning (for their proposed algorithm, which isn't that ApproxLP): "Even with Level 1 approximation, the proposed design improves energy efficiency up to 122× for machine learning on CIFAR-10, with almost negligible accuracy loss."

nafnlaus.bsky.social

Yeah. In hardware a bitshift isn't even an op; it just means that you wire your output one bit offset from where it otherwise would have been.
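Here's a small fixed-point sketch of that point, under the same assumptions as above plus a hypothetical 8-bit fraction format: the 2*y term becomes Y << 1, which in silicon is just the Y bus wired one bit position over rather than an executed operation.

```python
# Fixed-point version of 1 + x + 2*y with a hypothetical 8-bit fraction.
FRAC_BITS = 8
ONE = 1 << FRAC_BITS              # fixed-point encoding of 1.0

def approx_mul_fixed(X: int, Y: int) -> int:
    """Two adds and a wiring offset: no multiplier array, no added constant."""
    if Y > X:                     # swap so Y holds the smaller fraction
        X, Y = Y, X
    return ONE + X + (Y << 1)     # the shift is free: output wired 1 bit over

# Example: x = 0.75, y = 0.25 -> exact (1+x)(1+y) = 2.1875
X, Y = int(0.75 * ONE), int(0.25 * ONE)
print(approx_mul_fixed(X, Y) / ONE)   # prints 2.25
```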
