Apple sells consumer goods first and foremost. They likely don't see how increased device or services sales could ever return the hundreds of billions that the large AI companies are throwing down every year.
> The productivity gains from LLMs are real, but not in the "replace humans" direction.
It might be the beer talking, but every time someone comments on AI they have to say something along the lines of "LLMs do help". If I'm being really honest, the fact that everyone has to mention this in every comment, every blog post, and every presentation is because deep down nobody is buying it.
Or maybe they do buy it, but they don't want to get drawn into a totally derailing side conversation about the future of humanity and global warming. It's just a tiny acknowledgement that hey, you can throw an obfuscated blob of minified JavaScript at it and it can take the blob apart with way less human effort, which gets you to the interesting part of the RE question faster than if you had to do it by hand. By all means, don't buy it. I'm not the one getting left behind, however.
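To make that concrete, here's a toy before/after of the kind I mean (my own made-up example, vastly smaller than a real blob, written as TypeScript so the types are explicit):

```typescript
// Toy example: a "minified" one-liner of the sort you'd paste into an LLM.
const f = (a: number[]) => a.reduce((p, c) => (c > p ? c : p), -Infinity);

// The kind of readable reconstruction an LLM (or a patient human) hands back:
function maxOf(values: number[]): number {
  let max = -Infinity;
  for (const value of values) {
    if (value > max) max = value;
  }
  return max;
}

console.log(f([3, 1, 4]), maxOf([3, 1, 4])); // 4 4; behavior is unchanged
```

A real blob is thousands of lines of this, with mangled names and control-flow tricks on top, which is exactly the tedium you want to skip.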
It does help A LOT, particularly in the case of security research.
For example, before AI I tended to avoid freelance pen-testing work because I didn't enjoy the tedious work of reading tons of documentation about random platforms to try to understand how they worked, and searching all over StackOverflow.
Now with LLMs, I can give one some random-looking error message and it can clearly and instantly tell me what the error means at a deep technical level: what engine was used, what version, what library/module... I can pen test platforms I have zero familiarity with.
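For anyone curious what that loop looks like in practice, here's a minimal sketch, assuming the OpenAI Node SDK; the model name and prompt wording are placeholders I picked, not a recommendation:

```typescript
// Sketch only: paste an opaque error message, get a fingerprint of the stack.
// Assumes `npm install openai` and OPENAI_API_KEY in the environment.
import OpenAI from "openai";

const client = new OpenAI();

async function fingerprintError(errorMessage: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o", // placeholder; any capable model works
    messages: [
      {
        role: "system",
        content:
          "Given a raw error message from an unfamiliar platform, identify " +
          "the likely engine, version range, and library/module that " +
          "produced it, and briefly explain your reasoning.",
      },
      { role: "user", content: errorMessage },
    ],
  });
  return res.choices[0].message.content ?? "";
}

// Example: whatever random-looking error the target app leaked.
fingerprintError("TypeError: Cannot read properties of undefined (reading 'render')")
  .then(console.log);
```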
I just know a few platforms, engines, programming languages really well and I can use this existing knowledge to try to find parallels in other platforms I've never explored before.
The other day, on HackerOne, I found a pretty bad DoS vulnerability in a platform I'd never looked into before, using an engine and programming language I'd never used professionally; I found the issue within an hour of starting my search.
Yes, and at least 30 more minutes to write the report, with the help of an LLM. So it still required my analysis skills, but at least I was able to do it relatively fast... whereas I wouldn't even have considered doing this kind of work before, due to the hassle associated with the research...
There are multiple factors which are pulling me into cybersecurity.
Firstly, it requires less effort from me.
Secondly, the number of vulnerabilities seems to be growing exponentially... possibly in part because of AI.
I feel like it's more because the detractors are very loudly against it and the promoters are very loudly exaggerating the capabilities. Meanwhile, if you're a realistic bystander who is actually using it, you have moments where it's absolutely magnificent and insanely useful, and other moments where it kinda sucks, which leads to the somewhat reluctant conclusion that:
> The productivity gains from LLMs are real, but not in the "replace humans" direction.
Meanwhile the people who are explicitly on a side either say that there are no productivity gains or that nobody will have jobs in 6 months.
The article is literally about how much, if at all, AI helps. There are literally only two possible opinions someone can have on the subject: either it does or it doesn't.
Heavy use in CTFs doesn't surprise me at all. CTFs often throw curveballs or weird technologies that contestants might not be familiar with. Now you can instantly get a starting point from an LLM on what's going on or how something works, and it's not a major problem if the LLM is wrong; you just lose a little time.
Which makes me think: yes, LLMs can solve some of this, but still only some. It's more than a research tool when you combine tools and agentic workflows. I don't see a reason it should slow down.
Phones have 128 GB of storage and are vastly more powerful than the workstation I did my grad thesis on. Now that Electron has exhausted the last major avenue for application bloat, I don't see why "thin" would mean anything.
I’ve had great success “leading” in one business and difficulties in another. I learned what kind of orgs I can be effective in. They’re wildly different imo.
>How do they justify their job if they can't roll out huge visible changes?
This is the rot at the core of so much of big tech. So much of your bonus, performance review, promotion package, etc. hinges on "delivering impact" (i.e., doing the flashy stuff). Imagine a world where some internal R&D team took a risk on liquid design but then thought it was okay not to ship it because it didn't work out.
We used to treat macOS and Windows like direct competitors. They used to channel their efforts into competing for market share and driving one another out of weak customer bases. For a while, they did give us better operating systems.
Today, both software products are treated like monopolies. macOS is satisfied being an insular underdog, and Microsoft has no motivation to compete if Apple won't get off their ass.
Problem is that “the same” isn’t good enough. To get a promotion, you’d need to somehow prove that your specific change was so good that more customers are happy now than before.
To prove that, you need some data to compare before/after. Hm, how about how much time people spend in the software? Seems like a decent proxy. Well, plenty of people are very unhappily addicted to social media, and yet that’s what companies and investors frequently look at.
It’s very hard to come up with an incentive where just keeping things the same is acceptable. I mean it’s basically an admission that you as a company cannot innovate or invent better ways for people to interact with a computer.
> In large companies people engage in politics because it becomes necessary to accomplish large things.
At a large company, your job after a certain level depends on your “impact” and “value delivered”. The challenge is getting 20 other teams to work on your priorities and not their priorities. They too need to play to win to keep their job or get that promotion.
For software engineering, “impact” or “value delivered” are pretty much always your job unless you work somewhere really dysfunctional that’s measuring lines of code or some other nonsense. But that does become a lot about politics after some level.
I would not say it’s about getting other people aligned with your priorities instead of theirs, but rather about finding ways such that your priorities are aligned. There’s always the “your boss says you need to help me” sort of priority alignment, but much better is to find shared priorities. E.g. “We both need X; let’s work together.” “You need Foo, which you could more easily achieve by investing your efforts into my platform Bar.”
If you are a fresh grad, you can mostly just chug along with your tickets and churn out code. Your boss (if you have a good boss) will help you make sure the other people work with you.
When you are higher up, and you become said good boss, or that boss's boss, the dynamics of the grandfather comment kick in fully.
Agree. A fresh grad is still measured on “impact”, but that impact is generally localized: e.g. quality of individual code and design vs. the ability to wrangle others to work with you.
Impact is a handwavy way of saying “is your work good for the company”.