Hacker News | new | past | comments | ask | show | jobs | submit | nperez's comments

The HN audience is overwhelmingly aware of the issues around right to repair and data collection, so there isn't much reacting to do there - I assume there's already near-unanimous agreement that it's a good thing to educate people on, but we will have opinions on how to do it (or not do it) effectively.


Seems like a more organized way to do the equivalent of a folder full of md files + instructing the LLM to ls that folder and read the ones it needs


If so, it would be most welcome, since LLMs don't always follow a folder full of MD files with the same depth and consistency.


What makes it more likely that Claude would read these .md files, then?


Skills are hopefully loaded through a deterministic process that is guaranteed to occur, instead of a non-deterministic one that can only be counted on to happen most of the time (the way it is now).


It is literally just injecting context into the prompt.


It includes both the file names and a configurable description string. That’s where you put the TLDR of when to use each skill.
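As a rough sketch of that mechanism (the `skills/*/SKILL.md` layout and the `description:` frontmatter field here are illustrative assumptions, not the documented format):

```python
from pathlib import Path

def load_skill_descriptions(skills_dir: str) -> str:
    """Scan a skills folder and build a prompt preamble listing each
    skill's name and its short description, so the model knows when
    it should go read the full file."""
    lines = []
    for md in sorted(Path(skills_dir).glob("*/SKILL.md")):
        name, description = md.parent.name, ""
        # Pull the TLDR out of a simple "description:" frontmatter line
        for line in md.read_text().splitlines():
            if line.startswith("description:"):
                description = line.split(":", 1)[1].strip()
                break
        lines.append(f"- {name}: {description} (read {md} for details)")
    return "Available skills:\n" + "\n".join(lines)
```

The point is that only the one-line descriptions are injected up front; the full skill body is fetched on demand, keeping the always-present context small.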


This improves it a great deal, but at a certain point, maybe 60-80% of the way through, it can start fading.


trained to


I feel like this sort of thing will be referenced for comic relief in future talks about hysteria at the dawn of the AI era.

The article actually contains the sentence "The machines aren’t just taking over our work—they’re taking over our minds." which reminds me more of Reefer Madness than an honest critique of modern tech.


Whether web or native is better is hardly relevant to the core of this issue IMO, which is about fundamental rights to admin our own devices. Having to make a network request to fetch an external resource every time you want to run code on your own device is sort of a non-solution to this problem.

For a while, I had stopped flashing custom ROMs because the default Android experience was good enough for me, but it looks like this is now necessary again.


And then you cannot use your phone for payments.


> And then you cannot use your phone for payments.

That's why you have a debit card. And if your bank won't give you a debit card, you find a better bank.


It's super convenient to store your payment card securely in your phone, though, and not have to carry around another card that can be stolen or lost at any time. Simply taking your phone out and holding it to a reader is nicer than pulling out an entire card. Yet again, useful device functionality is purposefully neutered under the guise of security.


One of my banks requires an app on my phone or desktop. The Android app does not work on /e/OS; it needs an original Google Android system. The desktop app runs only on original Windows (tested with Wine, it doesn't work) for any payment I make. Only withdrawing cash from an ATM doesn't need 2FA.


You clearly need to switch to a bank that isn't user hostile.

Also we are obviously in dire need of legislation preventing such behavior.


The Android app used to work until a few months ago, but they updated their system and forced everyone to switch.


Have you tried calling your bank advisor to see what they can recommend? (You do have a phone number to call, right?)


Do you mean ask my bank what they recommend? Other than getting a new phone or a Windows computer, what exactly should they tell me?


Yes, I mean explaining to your bank advisor that you cannot run their app, and asking whether they can recommend any solutions other than switching banks. They must have a number of elderly customers, and they most certainly have something to propose to them.

It worked for me, in a European country with very high smartphone usage (you can pay on the bus by scanning a QR code). Twice.


Agreed 100%. When you work on an app every day, it all makes sense to see the cool features flash by, but you need to design for people who don't have a clue what your app does.


All that manic zooming... reminded me of Wayne's World:

Extreme close-up: https://www.youtube.com/watch?v=YdxsWw_gV3E


I'm not a modeler but I've tried it a few times. For me, modeling is a pain that I need to deal with to solo-dev a 3d game project. I would think about using something like this for small indie projects to output super low-poly base models, which I could then essentially use as a scaffold for my own finer adjustments. Saving time is better than generating high-poly masterpieces, for me at least.


Kind of fun to get into a brutal insult battle with. Hope I didn't violate any TOS with that one.


Since they've been instructed to keep all logs, your social credit score might suffer.


No degree. I've been working the full stack for almost 15 years full time, including recently learning to train various types of gen AI models. There are still orgs that are rigid about their requirements, but I'm a mid-30s guy at an experience level where it doesn't make a whole lot of sense to overthink what I was learning 2 decades ago.


It's inevitable because it's here. LLMs aren't the "future" anymore, they're the present. They're unseating Google as the SOTA method of finding information on the internet. People have been trying to do that for decades. The future probably holds even bigger things, but even if it plateaus for a while, showing real ability to defeat traditional search is a crazy start and just one example.


It's ironic that you picked that example given that LLMs are simultaneously turning the internet into a vast ocean of useless AI generated garbage.

General web search will soon be a completely meaningless concept.


> They're unseating Google as the SOTA method of finding information on the internet.

Hardly. Google is at the frontier of these developments, and has enough resources to be a market leader. Trillion-dollar corporations have the best chances of reaping the benefits of this technology.

Besides, these tools can't be relied on as a source of factual information. Filtering spam and junk from web search results requires the same critical thinking as filtering LLM hallucinations and biases. The worst of both worlds is when "agents" summarize junk from the web.


Debating whether LLMs are the future is like debating whether online advertising is the future. We've long, long passed that point. They're the present, and they're not going to magically go away.

Is online advertising good for the society? Probably not.

Can you use ad blockers? Yes.

Can you avoid putting ads on your personal website? Yes.

All of these are irrelevant in the context of "inevitabilism." Online advertising happened. So did LLMs.


I'm not going to disagree because greed knows no bounds, but that could be RIP for the enthusiast crowd's proprietary LLM use. We may not have cheap local open models that beat the SOTA, but is it possible to beat an ad-poisoned SOTA model on a consumer laptop? Maybe.


If future LLM business models mimic the others, 80% of the prompt will be spent preventing ad recommendations, and the agent will in turn reluctantly respond while suggesting that it is malicious to ask for that.

I'm really looking forward to something like a GNU GPT that tries to be as factual, unbiased, libre and open-source as possible (possibly built/trained with Guix OS so we can ensure byte-for-byte reproducibility).


On the flip side, there could be a cottage industry churning out models of various strains and purities.

This will distress the big players, who want an open field to make money from their own adulterated, inferior product, so home-grown LLMs will probably end up being outlawed or something.


Yes, the future is in making a plethora of hyper-specialized LLM's, not a sci-fi assistant monopoly.

E.g., I'm sure people will pay for an LLM that plays Magic the Gathering well. They don't need it to know about German poetry or Pokemon trivia.

This could probably be done as LoRAs on top of existing generalist open-weight models. Envision running this locally and having hundreds of LLM "plugins", a la phone apps.
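A minimal sketch of that plugin idea, assuming a purely hypothetical local registry and naive keyword matching (real routing would presumably be much smarter, e.g. a small classifier):

```python
# Hypothetical local registry of hyper-specialized adapters ("plugins").
# Each entry maps trigger keywords to the LoRA adapter that handles them;
# all names and paths here are made up for illustration.
REGISTRY = {
    "mtg-strategist": {"keywords": {"magic", "mana", "mulligan"},
                       "adapter": "loras/mtg.safetensors"},
    "german-poetry":  {"keywords": {"rilke", "goethe", "gedicht"},
                       "adapter": "loras/poetry_de.safetensors"},
}

def pick_adapter(prompt: str):
    """Return the adapter path whose keywords best match the prompt,
    or None to fall back to the generalist base model."""
    words = set(prompt.lower().split())
    best, best_hits = None, 0
    for spec in REGISTRY.values():
        hits = len(words & spec["keywords"])
        if hits > best_hits:
            best, best_hits = spec["adapter"], hits
    return best
```

The selected adapter would then be loaded on top of the base model for that one request, so only the relevant specialization occupies memory at a time.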

