I believe that low power = cheaper tokens = more affordable and sustainable, to me this is what a consumer will benefit from overall. Power hungry GPUs seem to sit better in research, commerce, and enterprise.
The Nvidia killer would be chips and memory that are affordable enough to run a good enough model on a personal device, like a smartphone.
I think the future of this tech, if the general populace buys into LLMs being useful enough to pay a small premium for the device, is personal models that by their nature provide privacy. The amount of personal information folks unload on ChatGPT and the like is astounding. AI virtual girlfriend apps frequently get fed the most darkest kinks, vulnerable admissions, and maybe even incriminating conversations, according to Redditors that are addicted to these things. This is all given away to no-name companies that stand up apps on the app store.
Google even states that if you turn Gemini history on then they will be able to review anything you talk about.
For complex token prediction that requires a bigger model the personal could switch to consulting a cloud LLM, but privacy really needs to be ensured for consumers.
I don't believe we need cutting edge reasoning, or party trick LLMs for day to day personal assistance, chat, or information discovery.
Don't coding LLMs kind of fill this gap? I can't imagine anyone who isn't a pro wanting to spend time learning HTML when they can just describe what they want in plain text and get something good enough.
I don't really understand why the author needed to use all those words to suggest t-shirt sizes instead of story points.
All the stuff about breaking down tasks, watching a backlog queue to monitor cadence, and have regular meetings is already happening with, or without story points.
People overthink this stuff all the time. Every team should figure out what works best for them, even down to project by project. Getting shit done isn't hard to monitor. You have a bucket of well defined tasks, have sprint meetings, look for blockers and assumptions, watch work flowing through. It's not really that difficult. Whether you use story points or some other estimation tool is really just an exercise of calibration, it's not gospel. The estimation process is the important thing, discuss as a team, agree on complexity, make sure the task is bite sized etc.
There is no way they can go back to engineering leadership. An engineer focussed CEO would do some spring cleaning and the board isn't going to vote someone like that in. It's more of the same but this time they'll pinky promise to do better QA. That said, one more failure and if their commercial wing isn't already done, it would be.
Reddit is a cesspit of its former self. The OG reddit is far from what it used to be. It's no longer an innocent link sharing platform, it's a socio-political platform with so much slime in between the subs that I find useful. Try and use Reddit daily and ignore all the drama, political shit, and covert rage click posts. It's emotionally exhausting not to get pulled in and have to constantly triage your home feed.
I have deleted my account at least half a dozen times and just tried to use it as a source of useful information, but I inevitably fall into the pit of getting another account because I can't control myself objecting to the nasty stuff on there. I know this is partly my self control problem, but social media in general is just awful. I can get rid of FB easily, there is literally no benefit for me being on there, but Reddit legit has useful content about all of my hobbies and how to do X.
It is unkillable but for some pretty rotten reasons. It's a social media platform mixed in with really useful content. Come for the search result, stay for the drama is what happens to me.
One thing that happened was the anti-tracking movement provided a better interface. They have their own comment sections that are somewhat better than the actual site. Bifurcation ensues.
It's all sorts of extremism, for sure. I found myself getting pulled into some of it over the Gaza war. It took me a while to realize what was happening to me, it was making me miserable, angry, anxious. I reflected on what was happening to my mental health, my reactivity, believing things without checking them before I responded. Similar on YouTube.
Loads of pro-Russian stuff that started to slip in, if I watched even one they started to accumulate and take over from what I used to watch for enjoyment. I then noticed these American pundits were using hijacked accounts to game the algo and get reach. Like one account was a vietnamese women's fashion account 2 years ago, now it's Americans talking about how Putin is definitely going to win the war very soon.
One day someone is going to write a history book about the 2020s propaganda and how technology was used as a psy-op, or whatever is going on. I didn't believe this was a real thing until I started questioning my own thoughts.
Yeah, this is to be expected with early adoption. This stuff comes out of the lab and it's not perfect. The key thing to evaluate is the trajectory and pace of development. Much of what folks challenged ChatGPT with a year ago is long lost in the dust. Go look at stable diffusion this time last year. Dall-E couldn't do words and hands, it nails that 90% of the time in my experience today.
About words, Dall-e is nor even close to nail it 90% of the time. Not even 50%. Maybe they nerf it when you request a logo from it, but that was my experience in the last few days.
What are you getting at when you say "secretive" injections? Isn't this stuff basically how any AI business shapes their public GPTs? I don't even know what a LLM looks like without the primary prompts tuning it's attitude and biases. Can you run a GPT responsibly without making some discretional directions for it? And being secretive, isn't that reasonable for a corporation - how they tune their LLM is surely valuable IP.
And this is just addressing corporations, people running their own LLMs are the bigger problem. They have zero accountability and almost the same tools as the big players.
I must be misunderstanding what these prompts are used for.
I agree, but how will this device ever get beyond home entertainment or light office work if it's always going to be a big screen strapped to your face, not to mention the social aspect. I dread to live in a world where people are walking around self absorbed prodding and poking at things no one else can see. It just seems so selfish and antisocial.