You could also use headless Selenium under the hood and pipe the entire DOM of the document to the model after the JavaScript has loaded. Of course it would be much slower, but it would also address the main worry people have, which is that many websites will flat out not show anything in response to the initial GET request.
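A minimal sketch of that approach with Selenium (assumes `selenium` is installed and Chrome plus a chromedriver are available; the function name is mine):

```python
def fetch_rendered_dom(url: str) -> str:
    """Return the page's DOM after JavaScript has executed."""
    # Imported inside the function so the snippet parses even
    # where selenium isn't installed.
    from selenium import webdriver

    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")  # headless Chrome, no visible window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        # page_source is the serialized DOM at this moment,
        # i.e. after scripts have run, unlike a raw GET body.
        return driver.page_source
    finally:
        driver.quit()
```

For pages that render asynchronously you'd typically add an explicit wait (e.g. `WebDriverWait`) before reading `page_source`, since "after the JavaScript was loaded" isn't a single well-defined moment.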
Because if we're unlucky, Scott will think in the final seconds of his life as he watches the world burn "I could have tried harder and worried less about my reputation".
I don't think it's a matter of being worried about reputation. Making credible predictions and doing rigorous analysis are important in all scenarios. If superintelligence really strikes in 2027, I feel like AI 2027 would have been right only by coincidence, and would probably have only detracted from safety engineering efforts in the process.
Despite many years of development, I find lsp and eglot to be mostly unusably slow. I need my Emacs to be fast, and the only way to achieve that is something old-school like Jedi/Elpy for Python.
Yikes. I'm going to follow this one because it's right up my alley, but I'm worried I will absolutely hate the process if some standards don't change. E.g. having to have multiple functions called `get()` for them to be GET requests is going to drive mypy/flake8 mad.
You can just use `@app.get` and name your function whatever you like, just like FastAPI, if you prefer.
Although I don't see why flake8 should care: single-dispatch (`functools.singledispatch`) is built into the Python stdlib, so having multiple functions with the same name is not weird or new.
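For reference, the stdlib mechanism is `functools.singledispatch`; a small sketch (this `get` function is illustrative, not from the library under discussion):

```python
from functools import singledispatch

@singledispatch
def get(resource):
    # Fallback for types with no registered implementation.
    raise TypeError(f"no handler for {type(resource).__name__}")

@get.register
def _(resource: int):
    # Dispatched on the annotated parameter type (Python 3.7+).
    return f"item #{resource}"

@get.register
def _(resource: str):
    return f"page {resource!r}"
```

Note the registered implementations are conventionally named `_` precisely so that linters don't flag them as redefinitions of `get`.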
Thanks for the info. In general, complying with established conventions (even if you don't personally like them) can lower the barrier to entry for people who might otherwise superficially reject your library on aesthetic grounds.
If you'd like to dig deeper, the diagnostics are:

  F811 redefinition of unused 'get' from line xx

from flake8, and

  error: Name "get" already defined on line xx  [no-redef]

from mypy.
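As a plain-Python illustration of why the decorator style sidesteps both diagnostics (this `App` class is a hypothetical stand-in, not the actual API of the library or of FastAPI):

```python
# Hypothetical minimal route registry: decorator-based registration
# (as in FastAPI's @app.get) stores handlers under (method, path) keys,
# so each handler can keep a distinct function name.
class App:
    def __init__(self):
        self.routes = {}

    def get(self, path):
        def register(handler):
            self.routes[("GET", path)] = handler
            return handler
        return register

app = App()

@app.get("/users")
def list_users():  # distinct name: no F811, no [no-redef]
    return ["alice", "bob"]

@app.get("/items")
def list_items():
    return ["hammer"]
```

Since every handler has its own name, both flake8 and mypy stay quiet, and the route is identified by the decorator argument rather than the function name.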
Thank you for working on gptel, it's an excellent package. I'm still using Copilot more because of the pure speed (it competes with company-mode/LSP), but I never use it if it suggests more than one line; the quality is just not there. Having access to GPT-4 from gptel has been very useful, though. Can't wait to play around with Claude 3.
The weird thing here is that the only change is making the denominator ln(exp(x)) instead of x. Catastrophic cancellation is still happening in the numerator (it’s still exp(x)-1), and the denominator winds up being some really tiny number.
It’s just that, due to the quirks of the floating point calculations involved, the numerator and denominator wind up being nearly the same noisy approximation to x, whereas in the original calculation that wasn’t true.
That's not what the post says, if I understand correctly: the post explains why in certain situations the "noise" disappears, and in other cases it doesn't.
Many libm implementations don't have an accurate `log` or `exp` routine, so there does exist a risk. (Of course, it's also true that many of them special-case `log(x) ~= x - 1` for `x` close enough to 1, and `exp(x) ~= 1 + x` for small enough `x`.)
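The cancellation effect being discussed is easy to reproduce; a minimal sketch (function names are mine) comparing the two forms of (exp(x) - 1)/x for a tiny x, assuming a reasonably accurate libm `log`:

```python
import math

def f_naive(x):
    # (exp(x) - 1) / x: the numerator suffers catastrophic
    # cancellation for tiny x, and the denominator x carries
    # none of that rounding noise, so the noise survives.
    return (math.exp(x) - 1.0) / x

def f_tricky(x):
    # Same quantity, but with log(exp(x)) as the denominator:
    # numerator and denominator are now nearly the same noisy
    # approximation to x, so the noise largely cancels.
    y = math.exp(x)
    if y == 1.0:
        return 1.0  # limit of (exp(x) - 1)/x as x -> 0
    return (y - 1.0) / math.log(y)

x = 1e-15  # true value of (exp(x) - 1)/x here is ~1.0000000000000005
```

With x = 1e-15, `f_naive` is typically off by around 10%, while `f_tricky` lands within a few ulps of 1.0, which is the behavior the post analyzes.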