One is a cybernetic system. It has sensors, a controller, a decision system, goals, and actuators. Arguably it's alive, but I think the definition of cybernetics is sufficient because it's objective.
Over the past few years, especially in places like HN, many people have argued that AI is different in this or that relevant way. It's perfectly reasonable to disagree with them, but the implication of this snarky comment is that nobody is making these arguments in the first place.
I can remember the 16-page _Newsweek_ ad quite vividly: the Mac was something special, and even its spiritual successor, the NeXT Cube, did not reach the level of artistic flair the Mac hit, as shown by a quick perusal of:
It was an interesting design, well suited to the target audience, and it presented quite well in person (a co-worker bought two: one donated, the other for his personal use when hiking).
That sounds like a solution looking for a problem, though. I see plenty of arguments against throwing critical safety systems that are in charge of people's lives into an LLM "just in case the result is better than what the current battle-hardened systems already provide".
To properly test an LLM-based emergency system against the current as-is system, there needs to be a way of verifying whether an LLM-detected emergency is classed as an emergency by the as-is system. If this information were available publicly, it could let bad actors do things like stress-test the EMP tolerance of the current systems, or probe what level of malware infiltration gets detected.
Same with BuildCache, except you also get a fast local cache so you effectively have an L1 and an L2 cache.
In fact, since you also have super fast "direct mode" caching that bypasses the preprocessor (like ccache but unlike sccache), BuildCache really has three logical cache levels: direct, preprocessor, and remote (S3, Redis, ...).
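The three-level lookup described above can be sketched roughly like this. This is a hypothetical illustration of the general technique (direct hash first, preprocessed hash second, remote last, with promotion on a hit), not BuildCache's actual API or internals; all names here are made up.

```python
import hashlib

class MultiLevelCache:
    """Illustrative three-level compilation cache: direct and
    preprocessor levels are local (the "L1"), remote is the "L2"."""

    def __init__(self):
        self.direct = {}        # keyed by hash of raw source (+ headers)
        self.preprocessed = {}  # keyed by hash of preprocessed output
        self.remote = {}        # stand-in for S3 / Redis in a real setup

    @staticmethod
    def _key(data: str) -> str:
        return hashlib.sha256(data.encode()).hexdigest()

    def get(self, source: str, preprocess):
        # 1. Direct mode: hit without ever running the preprocessor.
        k = self._key(source)
        if k in self.direct:
            return self.direct[k]
        # 2. Preprocessor mode: hash the preprocessed output instead.
        pk = self._key(preprocess(source))
        if pk in self.preprocessed:
            obj = self.preprocessed[pk]
        # 3. Remote fallback.
        elif pk in self.remote:
            obj = self.remote[pk]
        else:
            return None
        # Promote into the direct cache so next lookup skips steps 2-3.
        self.direct[k] = obj
        return obj

    def put(self, source: str, preprocess, obj):
        self.direct[self._key(source)] = obj
        pk = self._key(preprocess(source))
        self.preprocessed[pk] = obj
        self.remote[pk] = obj
```

The point of the ordering is that a direct-mode hit avoids the preprocessor entirely, which is where most of the speed difference over sccache-style caching comes from.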
Can you elaborate on your thesis as to why? It seems to me that, with raw code being less of a bottleneck, things like understanding the spec, polishing, and doing the fuzzy work around the edges become all the more important. Those were never strengths of outsourcing. In fact, I think the importance of those parts is a big reason the profession as a whole wasn't simply outsourced entirely, despite the compelling economic reasons for it.
Isn't it the other way around: AI replacing outsourcing? AI can do the implementation work, but you still need a human to specify what has to be done, give architecture guidance, and check and accept the resulting work (or reject it, with notes on what to fix). AI coding is basically outsourcing to AI.
This is the paradox. But because AI makes outsourced jobs easier, those workers need to compete, and so they will be able to take on those specification and quality-control jobs as well.
The paradox is that the quality of AI output is directly proportional to your expertise and understanding, while the trust/belief/confidence in its effectiveness is inversely proportional.
It will still pay to develop the core understanding. It is just that the world can remain irrational longer than you can remain solvent :).
I was rich even before I came into this field; my family owns a lot of agricultural land. I came to this field out of passion for it and was never really motivated by money.
Thing is, AI is taking outsourced jobs in India at a much faster rate than elsewhere.
The latest layoffs from Oracle mostly hit workers in India.
Monk fruit is quite expensive, so I'm afraid it will not become very popular in commercial products like candy bars and soft drinks. But for DIY it is certainly a nice option.
Can a search engine be ethical or safe?
Can an AI be ethical or safe?
If you answer differently for one or more of these questions, then you'll have to say why and where you draw the line.