Hacker News | shevy-java's comments

Good point. I think both AI companies and hardware makers should pay for the damage they caused to us here.

They act as a de-facto monopoly and milk us. Why is this allowed?


It's a business with huge up-front capital expenses and typically very low margins. Supply is scaling up slowly because it's hard, and if you overshoot, you go out of business.

Nobody is "allowing" this. It's a natural property of being both advanced technology and a commodity at the same time.


The strange deals on the entire future output are what was allowed. Try to do the same thing with onions and the government understands you are a criminal.

https://en.wikipedia.org/wiki/Onion_Futures_Act


That is quite the amusing read but it seems like a poorly constructed law. It wasn't futures themselves that were the problem there. The duo engaged in blatant market manipulation and severely disrupted part of the food supply in the process.

It has the makings of a natural monopoly, except it's compounded by RAM cartels colluding to shut out the last of the competitors.

Recently they had a second price fixing lawsuit thrown out (in the US).

Now, with the state of things, I'm sure another lawsuit will arrive and be thrown out, because the government will do anything to keep the AI bubble rolling, and a price-fixing suit would be a threat to national security, somehow. Obviously that's speculative and opinion, but to be clear: people are allowing it. There are, and even more so were, things that could be done.


Allowed? We live in a neoliberal world where corporate monopolies / oligopolies aren’t even remotely regulated. If you try to do even the gentlest regulation of companies people scream about communism and totalitarianism. Unless the regulation serves the monopolies by making it harder to enter the market.

It started with Reagan, and even parties on the “left” in the West believe in it, with very few exceptions.


> We live in a neoliberal world where corporate monopolies / oligopolies aren’t even remotely regulated. If you try to do even the gentlest regulation of companies people scream about communism and totalitarianism. Unless the regulation serves the monopolies by making it harder to enter the market.

The thing that enables this is pretty obvious. The population is divided into two camps, the first of which holds the heuristic that regulations are "communism and totalitarianism" and this camp is used to prevent e.g. antitrust rules/enforcement. The second camp holds the heuristic that companies need to be aggressively "regulated" and this camp is used to create/sustain rules making it harder to enter the market.

The problem is that ordinary people don't have the resources to dive into the details of any given proposal but the companies do. So what we need is a simple heuristic for ordinary people to distinguish them: Make the majority of "regulations" apply only to companies with more than 20% market share. No one is allowed to dump industrial waste in the river but only dominant companies have bureaucratic reporting requirements etc. Allow private lawsuits against dominant companies for certain offenses but only government-initiated prosecutions against smaller ones, the latter preventing incumbents from miring new challengers in litigation and requiring proof beyond a reasonable doubt.

This even makes logical sense, because most of the rules are attempts to mitigate an uncompetitive market, so applying them to new entrants or markets with >5 competitors is more likely to be deleterious, i.e. drive further consolidation. Whereas if the market is already consolidated then the thicket of rules constrains the incumbents from abusing their dominance in the uncompetitive market while encouraging new entrants who are below the threshold.


Arguably a more efficient approach might just be to have a tax that adds on to corporate tax incrementally for every % of market share a company has above, say, 7-8%. Then dominant companies are incentivised to re-invest in improving their efficiency rather than just buying or squeezing out competitors. A more evenly spread market would then, as a result, be against regulations that make smaller market participants less competitive, as they'd all be in relatively less stable positions.

Because for the last 60 years we've allowed big business to buy and hollow out our legal and education systems.

I want those AI companies that drove the prices up, to pay an immediate back-tax to all of us.

I don't want to pay more because of AI companies driving the price up. That is milking.


Can COBOL be called a living fossil?

I mean, programming languages do not live; and they do not "die", per se, either. Just the usage may go down towards 0.

COBOL would then be close to extinction. I think it only has a few niche places in the USA and perhaps a very few more areas, but I don't think it will survive for many more decades to come, whereas I think C or Python will still be around in, say, three decades.

> family with horizontal gene transfer

Well, you refer here to biology; viruses are the most famous for horizontal gene transfer, and transposons and plasmids do it too. But I don't think these terms apply to software that well. Code does not magically "transfer" and work; often you have to adjust to a particular architecture - that was one key reason why C became so widespread.

In biology you basically just have DNA: four states per slot in dsDNA (A, T, C, G). Here I exclude RNA viruses (they all need a cell for their own propagation anyway), but RNA is in many ways just like DNA - see reverse transcriptase, also found in viruses. So you don't have to translate much at all. Some organisms use different codons (mitochondrial DNA has a few different codon tables), but by and large what works in organism A works in organism B too, if you just wish to, say, create a protein. That's why "genetic engineering" is so simple in principle: it just works if you put genes into different organisms. Again, some details may differ, but e. g. UUU would code for phenylalanine in most organisms (UUU is the mRNA variant of course; in dsDNA it would be TTT).

Also, there is little to no "planning" when horizontal gene transfer happens, whereas porting requires thinking by a human. I don't feel that analogy works well at all.
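The "shared codon table" point can be sketched in a few lines of ruby. This is a toy illustration only: the table below is deliberately tiny (the real genetic code has 64 codons), and the mitochondrial variants mentioned above are ignored.

```ruby
# Toy sketch: a tiny, incomplete codon table shared by almost all
# organisms - which is why a gene "just works" when moved between them.
CODON_TABLE = {
  "UUU" => "Phe",  # phenylalanine, as in the example above
  "AUG" => "Met",  # methionine / start codon
  "GGC" => "Gly",  # glycine
  "UAA" => "Stop"  # stop codon
}

# Translate an mRNA string three bases at a time, halting at a stop codon.
def translate(mrna)
  protein = []
  mrna.scan(/.{3}/).each do |codon|
    aa = CODON_TABLE.fetch(codon, "?")
    break if aa == "Stop"
    protein << aa
  end
  protein
end

puts translate("AUGUUUGGCUAA").join("-")  # => Met-Phe-Gly
```

The same string would translate the same way in nearly any organism; that is the whole "portability" argument in one hash lookup.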


Japanese are the original micro-optimisers. Kaizen.

South Koreans then took over. In between were the Taiwanese.

The next wave will be mainland China.


China is not about optimising, it's about maximising

> This reminds me of what Zed Shaw said: for some reason code written without an IDE is better, and he's not sure why.

I am not sure whether the statement is correct; I am not sure whether the statement is incorrect either. But I tested many editors and IDEs over the years.

IDEs can be useful, but they also hide abstractions. I noticed this with IntelliJ IDEA in particular; before I used it I was using my old, simple editor, and ruby as the glue for numerous actions. So when I want to compile something, I just do, say:

    run FooBar.java
And this can do many things for me, including generating a binary via GraalVM and taking care of options. "run" is an alias for run.rb, which in turn handles running anything on my computer. In the IDE, I would have to add some config options, and finding them is annoying; and often I can't do the things I do via the commandline. So when I went to use the IDE, I felt limited and crippled in what I could do.

My whole computer is actually an IDE already - not as convenient as a good GUI, of course, but I have all the options I want or need, and I can change and improve on each of them. Ruby acts as generic glue towards everything else on Linux here. It's perhaps not as sophisticated as a good IDE, but I can think in terms of what I want to do, without having to adjust to an IDE. This was also one reason I abandoned vim - I no longer wanted my brain to adjust to vim. I am too used to adjusting the language to how I think; in ruby this is easily possible. (In Java not so much, but one kind of has to combine ruby with a faster language anyway, be it C, C++, Go, Rust ... or Java. Ruby could also be replaced, e. g. with Python, so I feel that discussion is very similar; they are in a similar niche of usage too.)
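A dispatcher of this kind could look roughly like the following. This is a hypothetical sketch, not the author's actual run.rb - the real script presumably handles many more cases (GraalVM binaries, options, and so on). `command_for` only decides *what* to run, based on the file extension, which keeps the dispatch logic testable separately from the actual execution.

```ruby
# Hypothetical run.rb-style dispatcher: map a file to the shell
# commands that would build and/or run it, keyed on its extension.
def command_for(file)
  base = File.basename(file, File.extname(file))
  case File.extname(file)
  when ".java" then [["javac", file], ["java", base]]
  when ".rb"   then [["ruby", file]]
  when ".c"    then [["gcc", file, "-o", base], ["./#{base}"]]
  else raise ArgumentError, "no handler registered for #{file}"
  end
end

# Run the commands in sequence, stopping at the first failure.
def run(file)
  command_for(file).all? { |cmd| system(*cmd) }
end

p command_for("FooBar.java")  # => [["javac", "FooBar.java"], ["java", "FooBar"]]
```

Adding a new file type is then just one more `when` branch, which is the kind of change that is trivial in a plain script but buried in config dialogs in an IDE.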

Simpler tools are a forcing function for simplicity. If you don't have code search, you'll need to write code that is legible without searching. If you don't have auto-refactoring utils, you'll have to be stricter about information-hiding. And if you don't have AI, you might hesitate to commit to the first thing you think of. You might go back to the drawing board in search of a deeper, simpler abstraction and end up reducing the size of your codebase instead of increasing it.

Conveniences sometimes make things more complicated in the long run, and I worry that code agents (the ultimate convenience) will lead to a sort of ultimate carelessness that makes our jobs harder.


> Simpler tools are a forcing function for simplicity. If you don't have code search, you'll need to write code that is legible without searching.

i was working in a place that had a real tech debt laden system. it was an absolute horror show. an offshore dev, the “manager” guy and i were sitting in a zoom call and i was ranting about how over complicated and horrific the codebase was, using one component as a specific example.

the offshore dev proceeded to use the JetBrains Ctrl + B keybind (jump to usages/definitions) to try and walk through how it all worked — “it’s really simple!” he said.

after a while i got frustrated, and interrupted him to point out that he’d had to navigate across something like 4 different files, multiple different levels of class inheritance and i don’t know how many different methods on those classes just to explain one component of a system used by maybe 5 people.

i used nano for a lot of that job. it forced me to be smarter by doing things simpler.


I really like this approach. A good reminder that Ruby started out as a shell scripting language, as evidenced by many of the built-in primitives useful for shell programming.
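For example, here are a few of those shell-flavoured primitives - everything below is core Ruby or stdlib, nothing third-party:

```ruby
require "fileutils"
require "tmpdir"

Dir.mktmpdir do |dir|
  FileUtils.mkdir_p(File.join(dir, "a/b/c"))        # like `mkdir -p`
  FileUtils.touch(File.join(dir, "a/b/c/hello.rb")) # like `touch`
  puts Dir.glob("**/*.rb", base: dir)               # shell-style globbing
  puts `echo hi`.strip                              # backticks capture command output
  puts $?.success?                                  # $? holds the exit status, as in sh
end
```

Backticks, `$?`, `Dir.glob` and `FileUtils` map almost one-to-one onto the shell idioms they replace, which is a large part of why Ruby works so well as glue.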

When .NET first came out I started learning it by writing C# code in Notepad and using csc.exe to compile it. I've never really used Visual Studio because it always made me feel that I didn't understand what was happening (that said, I changed jobs and never did any really big .NET project work).

Yeah, I agree. I also had a C64 and DOS, and while both had tons of games, the Amiga was a bit different. In a way the Amiga was kind of a stronger predecessor to e. g. the Xbox or similar. (There were also TV console games, of course, and I played them too, so those may more appropriately be called the forerunners of the Xbox and other consoles; but I feel the Amiga was kind of positioned in two places here, whereas DOS was more on the application and business side than the games side, even though there were also many good DOS games. Master of Orion 1 is one of my all-time favourites; Master of Orion 2 extended many things, but the gameplay also got slower and I did not like that. I loved the fast play style that was possible, also in other games - Civilization 1, SimCity 1 and so forth.)

I liked the Amiga. I would not really use it today, but I recall playing many games in the 1980s. Those kinds of games are mostly dead now (save for a few indie games perhaps). Today's games are usually the same - a 3D engine with some fancy audio and video and dumbed-down gameplay. (Not all games, mind you; for instance, I liked the idea behind Little Nightmares. I never played it myself, I don't have the time, but I watched several clips on youtube and found the gameplay different from the "canonical" games we now have, which are a perpetual money grab.)

This sounds as if walking on the Moon led to some symptoms. But Chris Hadfield said "space has a rusty burn smell" even elsewhere. So how can they conclude that moonwalking specifically led to what was described? The article has "The toxic side of the Moon", but IMO it would be more reasonable to assume that space in general is toxic, not "only" the Moon. It also means that the space suits are not well equipped - people 50 years from now will shake their heads at how naive we may have been.

> Companies that utilize these tools will thrive

I read this before, but I have some doubts. I recall some companies that were surprised when prices were suddenly increased; the usual examples include Amazon, Google and some more, but this can happen to any company, including AI slop master companies. I am not at all claiming that AI slop has zero use cases - there are use cases, so I don't deny that. But the assumption generated here by AI slop, claiming that all the problems will soon have been solved and risk-free profits are to be made by all companies, is just nonsense. AI slop is a big liar. In fact, I am beginning to believe that the current US administration is an AI slop brigade. Every time the stock market yields some suspicious profits, it seems to me that the AI slop protects some thieves here.


"Up to version 30, it didn’t differentiate between trusted and untrusted files, and in effect treated all files as trusted."

Age verification aaaaaand Trusted Computing now! \o/

(Just kidding - but one has to point at the question of what trust is, exactly.)

I can not accept the "trusted files" claim; I don't think anyone can ever trust anything, unless there is some really objective criterion that is unchangeable. But if something is unchangeable, can it be useful for anything? Yes, you can ensure that a calculator correctly maps a given input to the correct output, or a function to do so, but in real calculation this is not the only factor to be guaranteed, not even in quantum computing. What if you manage to influence the calculation process via light/laser or any other means?

So I can't accept the term "trusted" here, because it implies one could and should trust something. That is a similar problem to the term AI - I never could accept that "AI" has anything to do with real intelligence on the given hardware; it is just a simulation of intelligence. Pattern matching and recognition only make it more likely to produce useful results, but that does not imply intelligence at all. It lacks true understanding - that is why it has to sniff for data, to improve the mapping of generated output. One can see this in many AI-centric videos on youtube: the AI is often hallucinating and creating videos that are not physically possible, e. g. a leg suddenly appearing in motion, twisted in the opposite direction. That shows the AI does not understand what it is doing; any human could realise that this is physically just not possible. I see this in cheaper AI videos even more, e. g. chuck norris videos where chuck kicks everyone yet the motions are totally wrong and detached from the "real" scene.

