davidbjaffe's comments | Hacker News

Yes. I keep mine on my lap. My regimen is to wake up at 3am and lie on the couch for several hours with coffee and write code (or these days, ask "someone else" to). It is highly productive and enjoyable, it breaks all the rules, and no, I do not have RSI. Long ago I started sandpapering the edges because, yeah, otherwise they hurt my wrists.

It's a good move. I have a case on my MBP that helps with this, because it means the edges I rest on are plastic, and not quite so sharp.

If you want to break more rules, you might consider chickenwing-ing your arms a bit. Deviate from the home row and learn to feel your way around at other angles. Then you can hold the laptop closer to you without putting your wrists at a weird angle (though you may have to use a non-thumb finger for the spacebar, as I do).

As I type this, my laptop is partly on my belly and partly on my chest, and my wrists are so far out to the sides that they completely miss the front edge of the laptop altogether. The angle is pretty favorable, too: my palms rest on the laptop on either side of the trackpad, and my wrists rest over the left and right sides of the bottom case but have little to no pressure on them.

No RSI here, either. Just make sure you're loose and comfortable and not forcing anything! That seems to help a lot.


Cool! For GLM-OCR, do you use "Option 2: Self-host with vLLM / SGLang" and in that case, am I correct that there is no internet connection involved and hence connection timeouts would be avoided entirely?


When you self-host, there's still a client/server relationship between your self-hosted inference server and the client that manages the processing of individual pages. You can still hit timeouts, depending on the configured timeout limit, the speed of your inference server, and the complexity of the pages you're processing. But if you keep running into them, you can let the client retry and/or raise the timeout limit.
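For example, with an OpenAI-compatible server (which vLLM and SGLang both expose), the timeout and retry count can be set on the client. A minimal sketch, assuming a local endpoint at localhost:8000 and a placeholder model name (both are assumptions, not from the project docs):

    # Assumes a self-hosted OpenAI-compatible server (vLLM/SGLang) at localhost:8000.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # hypothetical local endpoint
        api_key="unused-for-local",           # vLLM doesn't check the key by default
        timeout=120.0,                        # raise this if complex pages keep timing out
        max_retries=3,                        # let the client retry transient timeouts
    )

Every request made through this client then inherits the timeout and retry behavior, so you don't have to wrap individual page calls yourself.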

That said, this is already a small and fast model when hosted via MLX on macOS. If you run the inference server with a recent NVIDIA GPU and vLLM on Linux, it should be significantly faster. The big advantage of vLLM for OCR models is its continuous batching capability. With other OCR models that I couldn't self-host on macOS, like DeepSeek 2 OCR or Chandra 2, vLLM gave dramatic throughput improvements on big documents via continuous batching when I processed 8-10 pages at a time. This was with a single 4090 GPU.
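To actually benefit from continuous batching on the client side, you mostly just need to keep several page requests in flight at once; the server interleaves them. A rough sketch, assuming the same local endpoint, a placeholder model name, and made-up page file names and prompt:

    # Submit several pages concurrently so the server can batch them continuously.
    import base64
    from concurrent.futures import ThreadPoolExecutor
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused", timeout=300.0)

    def ocr_page(path: str) -> str:
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="glm-ocr",  # placeholder; use whatever model the server is serving
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Transcribe this page to Markdown."},
                    {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    pages = [f"page_{i}.png" for i in range(1, 9)]  # hypothetical page images
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(ocr_page, pages))

With 8-10 workers, the GPU stays busy instead of idling between sequential page requests, which is where the throughput gains come from.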


When I was in junior high school and high school, I would hang out at the Purdue University chess club. He was a regular, prone to laughter, a funny guy. We would play double speed chess (which we called "p'dorky") and other silliness. I had no idea he went on to do the cool things that he did.

