Dorion
@noirod.bsky.social

Do you use a good observability framework so you can see where the slowdown is? I always built my own but there must be good ones out there. There used to be a good Python profiler too.
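
A minimal sketch of that kind of profiling with the standard-library cProfile, wrapped around a single request-handling call; handle_request and the sample payload are stand-ins, not anything from the actual service:

    import cProfile
    import io
    import pstats


    def handle_request(payload: dict) -> int:
        # Stand-in workload; replace with the real per-request path.
        return sum(len(str(value)) for value in payload.values())


    profiler = cProfile.Profile()
    profiler.enable()
    handle_request({"text": "example post", "langs": ["en"]})
    profiler.disable()

    # Print the 15 most expensive calls by cumulative time.
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(15)
    print(stream.getvalue())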


aendra.com

I'm pretty sure I know where it is: when the queuing server gets a request from the firehose consumer, it creates a worker which instantiates PyTorch and loads the ML model, and the way it spawns means it loads the model from disk every time. I have no idea how to fix that, I'm not good at Python lol.
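
A hedged sketch of one common fix for that pattern: have each worker process load the model once at startup via a pool initializer, so requests reuse the already-loaded model instead of reloading it from disk every time. MODEL_PATH, score_text, and the Pool setup below are illustrative assumptions, not the actual service's code:

    from multiprocessing import Pool

    import torch

    MODEL_PATH = "classifier.pt"   # hypothetical path to the saved model
    _model = None                  # filled in once per worker by init_worker


    def init_worker() -> None:
        # Runs once when each worker process starts, so the expensive
        # torch.load happens per worker rather than per request.
        global _model
        _model = torch.load(MODEL_PATH, map_location="cpu")
        _model.eval()


    def score_text(text: str) -> float:
        # Uses the already-loaded module-level model; no disk I/O here.
        with torch.no_grad():
            ...  # tokenize `text` and run _model on it
        return 0.0  # placeholder score


    if __name__ == "__main__":
        with Pool(processes=4, initializer=init_worker) as pool:
            print(pool.map(score_text, ["first post", "second post"]))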
