Yeah, it is an interesting bubble to be in. I worked with a company that could not keep up with the rising SWE salaries and thus attracted a different kind of SWE. I definitely felt the difference in education with the new hires. Reading comprehension/attention was weak. AI will easily replace them, I guess.
Finding the data on this would be helpful, but it's still unclear to me. I'm not a fan of how that article from NU cites its sources loosely, including lazily citing Wikipedia.
>I worked with a company that could not keep up with the rising SWE salaries and thus attracted a different kind of SWE.
Maybe I'm misunderstanding you, but I would think that any level of SWE skill implies a minimum amount of competence such that they wouldn't fall for Trump's tricks? SWE is rearranging bits in accordance with logic... so you need to know logic, no?
Oh, I WISH that were the case, but I'd estimate only 10% of SWEs would fit your model of minimum competence... and yeah, a lot of that 10% are browsing HN. I recall in 2016 asking coworkers why they voted Trump. "My 401k" was a frequent answer.
Vibe coding existed long before AI, especially in web/startup/enterprise information systems. You don't need to be a critical thinker to make a successful RoR app.
Let us all reflect on AI with a core point this article is trying to make: that we build habits around a product. The industry's goal is to have our dependency. What a fabulous position to be in, where we can’t think or code without a subscription to their LLM assets.
A tiresome sysadmin I've been talking to is under the impression that "well, if <open-core-saas> stagnates or otherwise shifts focus away from our interests, then someone will just fork it, duh!" .. at least when glancing through Discord successors.
It seems to have originated in the US with Fire Departments:
> These reports show that a dry run in the jargon of the fire service at this period [1880s–1890s] was one that didn’t involve the use of water, as opposed to a wet run that did.
Interestingly, the one place I have seen "dry run" actually mean "dry run" is using an air compressor to check that a water loop (in a computer) doesn't leak, by verifying there's no drop in pressure.
I have seen organizers get stuck in the dopamine loop of focusing on inspiring content that "increases engagement", and get so fixated on moderating trolls that it actually gets in the way of doing impactful work. I'm definitely on the depression train on this front. It's far worse than digital versions of flyers; people aren't incentivized to show up when they can just keep scrolling for their fix.
I "trade" my content in kind -- garbage in, garbage out style -- combining my short-form renders from my commute with songs I think will match the rhythm, then publishing the result as a music video.
And I'm not chasing clicks, likes, or monetization on that platform; I was fortunate to have ignored FB's SSO with IG, as I deleted that account a decade or more ago.
I concur, and "apolitical" is probably not the best word. I think it is an attempt to convey that the platform can't ban people: it is resistant to infrastructure-level censorship. Here is a specific example use case:
I think the sort of person who would paste/type the URL for another social service into the freeform "or some other service" input on ShareOpenly is exactly the sort of person who has that web literacy, though. Which I guess doesn't support my "delete them all!" desire, but rather sadly reinforces keeping the status quo.
Is there a spreadsheet out there benchmarking local LLMs and hardware configs? I want to know if I should even bother with my Coffee Lake Xeon server, or if this is something to consider for my next gaming rig.
It's really not hard to test with llamafile or ollama, especially with smaller 7B models. Just have a go.
There are a bazillion and one hardware combinations, where even RAM timings can make a difference. Offloading a small portion to a GPU can make a HUGE difference. Some engines have been optimized to run on Pascal with CUDA compute capability below 7.0, and some have tricks for newer-gen cards with modern CUDA. Some engines only run on Linux while others are completely cross-platform. It is truly the wild west of combinatorics as they relate to hardware and software. It is bewildering, to say the least.
In other words, there is no clear "best" outside of a DGX and a Linux software stack. The only way to know anything right now is to test and optimize for what you want to accomplish by running a local LLM.
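When testing across your own hardware combinations, a simple apples-to-apples metric is generation throughput. As a minimal sketch (assuming ollama's `/api/generate` response, which reports `eval_count` and `eval_duration` in nanoseconds), you can reduce each run to tokens per second and compare configs that way:

```python
# Convert ollama-style generation stats into tokens/sec for comparing
# hardware/engine configs. eval_count is the number of generated tokens;
# eval_duration is the generation time in nanoseconds.
def tokens_per_sec(eval_count: int, eval_duration_ns: int) -> float:
    return eval_count / (eval_duration_ns / 1e9)

if __name__ == "__main__":
    # Example: 256 tokens generated in 8 seconds -> 32.0 tok/s
    print(tokens_per_sec(256, 8_000_000_000))
```

The field names are taken from ollama's API response; other engines (llama.cpp, llamafile) report equivalent timings in their verbose output, so the same arithmetic applies once you find the token count and duration.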
Thanks for shedding some light on this element of the story. I like to keep references like this when mustering support for action.
What does bipartisan even mean? I've seen my state lose a Republican congressperson who, though I disagreed with them, called out Trump for disrespecting branches of government during his first term, and who has since been replaced with a pro-Trump congressperson. The checks and balances are eroding, and the citizen response has to be strong.