
You could also use headless Selenium under the hood and pipe the entire DOM of the document to the model after the JavaScript has loaded. Of course it would make things much slower, but it would also address the main worry people have, which is that many websites will flat out not show anything in the initial GET request.


Can you flesh this out a tiny bit? Because for indie crawlers the JavaScript rendering is the main problem.


Here's a sketch: https://chatgpt.com/share/68640b97-9a48-8007-a27c-fdf85ff412... -- selenium drives your actual browser under the hood.
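
For the curious, a minimal sketch of the idea in Python (assuming Chrome and the selenium package are installed; the URL is a placeholder):

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    opts = Options()
    opts.add_argument("--headless=new")    # no visible window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get("https://example.com")  # JS executes as in a real browser
        dom = driver.page_source           # post-JS DOM, ready to pipe to a model
    finally:
        driver.quit()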


Because if we're unlucky, Scott will think in the final seconds of his life, as he watches the world burn: "I could have tried harder and worried less about my reputation."


I don't think it's a matter of being worried about reputation. Making credible predictions and doing rigorous analysis is important in all scenarios. If superintelligence really strikes in 2027, I feel like AI 2027 would be right only by coincidence, and would probably have detracted from safety engineering efforts in the process.


Despite many years of development, I find lsp-mode and eglot to be mostly unusably slow. I need my Emacs to be fast, and the only way to achieve that is something old-school like Jedi/Elpy for Python.


Yikes. I'm going to follow this one because it's right up my alley, but I'm worried I will absolutely hate the process if some standards don't change; e.g. having to have multiple functions called `get()` for them to be GET requests is going to drive mypy/flake8 mad.


You can just use `@app.get` and name your function whatever you like, just like in FastAPI, if you prefer.

Although I don't see why flake8 should care - single-dispatch generic functions (`functools.singledispatch`) are built into the Python stdlib, so having multiple functions answering to the same name is not weird or new.
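
For illustration, here's what the decorator route looks like in FastAPI, which the parent compares it to (this is real FastAPI API; presumably the library mirrors it):

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/items")
    def list_items():           # route comes from the decorator,
        return {"items": []}    # so the function name is free

    @app.get("/users")
    def list_users():           # distinct names: no linter complaints
        return {"users": []}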


Thanks for the info. In general, being compliant with established conventions (even if you don't personally like them) can lower the barrier to entry for people who might otherwise superficially reject your library on aesthetic grounds.

If you'd like to dig deeper, the reference is:

    F811 redefinition of unused 'get' from line xx 
from flake8 and

    error: Name "get" already defined on line xx  [no-redef]
from mypy.
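
A minimal (hypothetical) module that triggers both, to make the complaint concrete:

    # routes.py -- two handlers both named `get`, as the convention requires
    def get():    # handles GET /items
        return []

    def get():    # handles GET /users -> flake8: F811, mypy: [no-redef]
        return []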


But you would have these 'get' functions in different modules, so how would it be a 'redefinition'?


One might suggest adding:

  try:
      from wat import wat  # make `wat` available in every REPL session
  except ImportError:
      pass                 # wat not installed; start the REPL as usual
to your $PYTHONSTARTUP file to avoid the cumbersome import.


You can even use the inline base64 importer, which is pretty neat.

But I eventually printed that output and put it in a dir I point my PYTHONPATH to, so that I will always have it available.

Let's see if it sticks.


You do have to give them the company name, though (however inconsequential that is).


You can make something up. I don't have a name yet.


taxes...


Emacs implementation when? ;)


Just added it to gptel. (No image support though, it's a text-only LLM client.)


Thank you for working on gptel, it's an excellent package. I'm still using Copilot more because of the pure speed (competing with company-mode/LSP), but I never use it if it suggests more than one line; the quality is just not there. Having access to GPT-4 from gptel has been very useful, though. Can't wait to play around with Claude 3.


Fantastic work! I'm a huge fan of `gptel` and hope to contribute when I can.

Thank you again for the great tool.


Wow, this was fast. Excellent!


I just checked - surprisingly I cannot find any Emacs AI implementation that supports Claude's API.


Just added it to gptel.


If you use Emacs you're expected to know your way around programming and not need copilots :)


You have not checked out gptel then. It is super useful! Emacs really pairs well with LLMs.


Trying to subscribe to Pro but the website keeps loading (a 404 to Stripe's /invoices is the only non-2xx I see).


Actually, I also noticed a 400 to consumer_pricing with the response "Invalid country", even though I'm in Switzerland, which should be supported?


Claude.ai is not currently available in the EU... we should have prevented you from signing up in the first place, though (unless you're using a VPN...).

Sorry about that, we really want to expand availability and are working to do so.


Switzerland is not in the EU. Didn't use VPN.


Probably more coming soon, given he just left OpenAI to pursue other things.


Nothing weird about it. It should be obvious that subtracting two floats that are very close to each other results in a loss of numerical precision:

    1.000000003456e0 - 1.000000002345e0 = 0.000000001111e0 = 1.111[numerical noise]e-9

It's exactly the same issue here. `math.exp(1e-15)` is `1.000000000000001`. If you subtract 1, you get 1 significant digit and numerical noise.
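
To make that concrete, a quick sketch (values from a typical IEEE-754 double build of CPython; `math.expm1` is the stdlib function that exists to avoid exactly this cancellation):

    import math

    math.exp(1e-15)        # 1.000000000000001 -- 1 significant digit left
    math.exp(1e-15) - 1    # 1.1102230246251565e-15, mostly rounding noise
    math.expm1(1e-15)      # ~1e-15, the accurate result, no cancellation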


The weird thing here is that the only change is making the denominator ln(exp(x)) instead of x. Catastrophic cancellation is still happening in the numerator (it’s still exp(x)-1), and the denominator winds up being some really tiny number.

It’s just that, due to the quirks of the floating point calculations involved, the numerator and denominator wind up being nearly the same noisy approximation to x, whereas in the original calculation that wasn’t true.


That's not what the post says, if I understand correctly - the post explains why in certain situations the "noise" disappears, and in other cases it doesn't.

See the comparison between the f and g functions.


I see! Yes, the magic is that you can cancel the noise by repeating it twice:

    In [1]: math.exp(1e-15) - 1
    Out[1]: 1.1102230246251565e-15

    In [2]: math.log(math.exp(1e-15))
    Out[2]: 1.110223024625156e-15

Risky business though, I imagine it's implementation-dependent.


Agreed, this is risky business. The intermediate values still need to fit into floats and are still losing precision.

From the article:

    g(1e-9)  returns 1.0000000005,
    g(1e-12) returns 1.0000000000005,
    g(1e-15) returns 1.0000000000000004
but... g(1e-16) throws ZeroDivisionError: float division by zero.
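
For anyone following along, a sketch of f and g as I understand them from the article (assumed definitions, but they reproduce the values quoted above):

    import math

    def f(x):
        return (math.exp(x) - 1) / x                       # naive version

    def g(x):
        return (math.exp(x) - 1) / math.log(math.exp(x))   # article's trick

    f(1e-15)   # 1.1102230246251565 -- the raw noise divided by x
    g(1e-15)   # 1.0000000000000004 -- same noise top and bottom, cancels
    g(1e-16)   # ZeroDivisionError: exp(1e-16) rounds to exactly 1.0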


It's not (or shouldn't be); it's simply a result of the math, as the article explains at length.


Many libm implementations don't have a perfectly accurate `log` or `exp` routine, so there does exist a risk. (Of course, it's also true that many of them special-case `log(x) ~= x - 1` for `x` near 1 and `exp(x) ~= x + 1` for `x` near 0.)


The math hinges on exp(x) producing the same error in both places. So as long as exp(x) is deterministic, this should be all right.


I don't know of any libm that has log or exp sufficiently inaccurate for this to break. Do you?


Indeed, any sufficiently well-known libm wouldn't do that. But I can imagine some lesser-known libms with wild error bounds.

