>I've learned a lot of shit while getting AI to give me the answers
I would argue you're learning less than you might believe. Just as people don't learn math by watching others solve problems, you're not going to become a better engineer/problem solver by reading the output of ChatGPT.
If I know what I want to do and how I want to do it, and there's plumbing to make that a reality, am I not still solving problems? I'm just paying less attention to stuff that machines can successfully automate.
Regarding leveling up as an engineer, at this point in my career it's called management.
Yes and when my employers - including Amazon - decided they wanted to stop putting money into my account, I didn’t stress.
I told my wife as soon as I had the meeting with my manager at Amazon back in 2023 about my choice: “take $40K severance and leave immediately, or try (and fail) to work through the PIP”. She asked me what we were going to do. I said I was going to take the $40K and we were going to the US Tennis Open as planned in three weeks. I called an old manager, who didn't have a job for me, but he threw a contract my way for a quick AWS implementation while I was still interviewing.
If my job fell out from under me tomorrow, I seriously doubt that no one would at least offer me a short-term contract quickly.
They didn't have a better definition of AGI to draw from. The old Turing test proved not to be a particularly good test. So, lacking a definition, money was used as a proxy, which to me seems fair. Unless you've got a better definition of AGI that is solid enough to put in a high-dollar-value contract?
That's true, but the $100 billion requirement is the only hard qualification defined in earlier agreements. The rest of the condition was left to the "reasonable discretion" of the board of OpenAI. (https://archive.is/tMJoG)
Sabine Hossenfelder these days is a YouTube personality who likes to discuss subjects she's not an expert in. I don't know if that's the metric for "crank", but I take anything I hear from her with a massive grain of salt.
Sabine Hossenfelder has done a video on this. To paraphrase: she says she notices a subject people are talking about that she's not an expert in, so she accesses some recent papers on the subject, ideally including a literature review, reads them, considers everything she's read together, and forms an opinion.
I ask you, what else would you expect anyone to do? Isn't this exactly the scientific process? Anything else amounts to gatekeeping.
(quick edit: I'm all for taking everything anyone says on the internet with a grain of salt though, even peer reviewed papers shouldn't be taken uncritically)
Cold reading papers from outside your field isn't 'doing science'. As far as medicine or economics is concerned, she's identical to a layman (or worse, modulo arrogance).
Science is a collaborative social endeavor that exists under a shared set of norms and rules that have the goal of producing new knowledge. She's skipping the social part. She could email these people and ask for input! Many of her weird mistakes and misunderstandings could all be caught by cursory review from a middling grad student.
None of these papers were written for her, she is not the audience, you are not the audience. One of the points of graduate education is to get people to the point where they can meaningfully engage with the state of the art. This process takes years!
Compare her output to people like the math/comedy YouTuber Matt Parker or the Numberphile channel, which invite collaboration from the authors directly. They aren't experts themselves, but they do the work to make it interesting and present things as accurately as possible.
Every field has a shared language and culture that needs to be internalised to some degree before you can usefully engage with their contents. Some terms you think you are familiar with will have slightly different meanings within a domain, and just assuming you understand it during even a well-intentioned and careful read can still lead you astray.
The description she gives of what she is doing is a stellar example of good scientific inquiry.
The problem, or at least my perception of the situation, is that she does not do what she claims to be doing. She forms uninformed opinions optimized to be engaging, interesting, and conspiratorial, instead of boring sound interpretations of what she has read.
The sad thing is that the only way for someone reading this to know whether I am gatekeeping or warning about an actual crank is to do all of this work from scratch yourself.
(I easily concede that there are plenty of problems with the institution of "Science" today -- I just think she exploits the existence of these problems to aggrandize herself instead of engage in fixing them in a productive way)
It's the curse of engagement. If she read the literature and came to a "boring" opinion, it would be much harder to gain a following online. It isn't impossible to gain a following without getting conspiratorial, but it is much harder.
It often seems to me that a person's opinion on a subject is judged particularly harshly and derisively the more they are deemed an expert on some other, unrelated subject. I find this a little unfair.
Fairness doesn't come into play here, this is just about predicting which of the overwhelmingly many sources of information are worth paying attention to.
Feel free to come up with your own predictive model of whether someone is worth listening to. It's hard to compare such models fairly, but if you feel yours is better, it might be worth sharing.
Why should these things be simple? Graphics hardware varies greatly even across generations from the same vendors. Vulkan as an API is trying to offer the most functionality to as much of this hardware as possible. That means you have a lot of dials to tweak.
Trying to hide all the options away goes against what Vulkan was created for. Use OpenGL 4.6/WebGPU if you want simplicity.
A simple vkCreateSystemDefaultDevice() function like Metal's, instead of hundreds of lines of boilerplate, would go a long way toward making Vulkan more ergonomic, without having to give up a more verbose fallback path for the handful of Vulkan applications that need to pick a very specific device (and would then probably pick the wrong one on exotic hardware configs).
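A sketch of what such a "pick a sensible default" helper could do internally. The types here are hypothetical stand-ins for `VkPhysicalDeviceType`/`VkPhysicalDeviceProperties` (plain structs, so the sketch runs without the Vulkan SDK); the real helper would enumerate via `vkEnumeratePhysicalDevices` and score the results the same way:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-ins for the Vulkan enums/structs, so this compiles
// without the SDK. Not the real API.
enum class DeviceType { Other, IntegratedGpu, DiscreteGpu, VirtualGpu, Cpu };

struct DeviceInfo {
    std::string name;
    DeviceType type;
};

// Prefer a discrete GPU, then integrated, then anything else. This is the
// whole "default device" policy most applications actually want.
int scoreDevice(const DeviceInfo& d) {
    switch (d.type) {
        case DeviceType::DiscreteGpu:   return 3;
        case DeviceType::IntegratedGpu: return 2;
        case DeviceType::VirtualGpu:    return 1;
        default:                        return 0;
    }
}

const DeviceInfo* pickDefaultDevice(const std::vector<DeviceInfo>& devices) {
    const DeviceInfo* best = nullptr;
    int bestScore = -1;
    for (const auto& d : devices) {
        int s = scoreDevice(d);
        if (s > bestScore) { bestScore = s; best = &d; }
    }
    return best;  // nullptr if nothing was enumerated
}
```

Applications with exotic needs (multi-GPU, headless compute) would skip the helper and enumerate manually, which is exactly the verbose fallback path described above.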
And the rest of the API is full of similar examples of wasting developer time for the common code path.
Metal is a great example of providing both: a convenient 'beaten path' for 90% of use cases but still offering more verbose fallbacks when flexibility is needed.
Arguably, the original idea to provide a low-level explicit API also didn't quite work. Since GPU architectures are still vastly different (especially across desktop and mobile GPUs), a slightly more abstract API would be able to provide more wiggle room for drivers to implement an API feature more efficiently under the hood, and without requiring users to write different code paths for each GPU vendor.
Metal has the benefit of being developed by Apple for Apple devices. I'd imagine that constraint allows them to simplify code paths in a way Vulkan can't/won't. Again, Metal doesn't have to deal with supporting dozens of different hardware systems like Vulkan does.
Metal also works for external GPUs like NVIDIA or AMD though (not sure how much effort Apple still puts into those use cases, but Metal itself is flexible enough to deal with non-Apple GPUs).
CUDA can be complex if you want, but it offers more powerful functionality as an option that you can choose, rather than mandating maximum complexity right from the start. This is where Vulkan absolutely fails. It makes everything maximum effort, rather than making the common things easy.
I think CUDA and Vulkan are two completely different beasts, so I don't believe this is a good comparison. One is for GPGPU, and the other is a graphics API with compute shaders.
Also, CUDA is targeting a single vendor, whereas Vulkan is targeting as many platforms as possible.
The point still stands: Vulkan chose to go all-in on mandatory maximum complexity, instead of providing less-complex routes for the common cases. Several extensions in recent years have reduced that burden because it was recognized that this is an actual issue, and it demonstrated that less complexity would have been possible right from the start. Still a long way to go, though.
So I've been told, so I'm trying to take another look at it. At least the examples at https://github.com/SaschaWillems/Vulkan, which are probably not 1.3/1.4 yet except for the trianglevulkan13 example, are pure insanity. Coming from CUDA, I can't fathom why what should be simple things, like malloc, memcpy and kernel launches, end up needing 300x as many lines.
In part, because Vulkan is a graphics API, not a GPGPU framework like CUDA. They're entirely different beasts.
Vulkan is also trying to expose as many options as possible so as to be extensible on as many platforms as possible. And Vulkan isn't even trying to make things more complex than they need to be; this is just how complex graphics programming is, period. The only reason people think Vulkan/DX12 are overly complicated is that they're used to APIs where the majority of the heavy lifting comes from the drivers.
No, it is overly complex for modern hardware (unless you use shader objects). Vulkan forces you to statically specify a ton of state that's actually dynamic on modern GPUs. You could cut things down a ton with a new API. Ofc you'd have to require a certain level of hardware support, but imo that will become natural going forward.
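To make the static-vs-dynamic-state complaint concrete, here's a toy CPU-side sketch (hypothetical types, not the real API): with fully baked state, every distinct combination of state values needs its own pipeline object created up front, so the count multiplies; with dynamic state, one pipeline suffices and the values are just recorded per draw.

```cpp
#include <cstdint>
#include <initializer_list>
#include <vector>

// Toy stand-ins; real pipeline state objects carry far more fields.
enum class CullMode : uint8_t { None, Front, Back };
enum class BlendMode : uint8_t { Opaque, Alpha };

// Baked-state style: N cull modes x M blend modes = N*M pipeline objects
// that must be created (and cached) ahead of time.
struct BakedPipeline { CullMode cull; BlendMode blend; };

std::vector<BakedPipeline> bakeAllCombinations() {
    std::vector<BakedPipeline> pipelines;
    for (auto c : {CullMode::None, CullMode::Front, CullMode::Back})
        for (auto b : {BlendMode::Opaque, BlendMode::Alpha})
            pipelines.push_back({c, b});
    return pipelines;
}

// Dynamic-state style: one pipeline; state is set on the command buffer
// right before each draw, the way extended dynamic state / shader objects
// allow on hardware where these values really are dynamic.
struct Draw { CullMode cull; BlendMode blend; };

struct CommandBuffer {
    CullMode cull = CullMode::Back;
    BlendMode blend = BlendMode::Opaque;
    std::vector<Draw> draws;
    void setCullMode(CullMode c) { cull = c; }
    void setBlendMode(BlendMode b) { blend = b; }
    void draw() { draws.push_back({cull, blend}); }
};
```

Two toy enums already give six baked permutations; real engines multiply dozens of state fields this way, which is where the pipeline-explosion problem comes from.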
Actually, it would be kinda neat to see an API that's fully designed assuming a coherent, cached, shared memory space between device and host. Metal I guess is closest.
> In part, because Vulkan is a graphics API, not a GPGPU framework like CUDA. They're entirely different beasts.
Tbf, the distinction between rendering and compute has been disappearing for quite a while now. Apart from texture sampling, there isn't much reason to have hardware dedicated to rendering tasks on GPUs, and when there's hardly any dedicated rendering hardware left, why still have dedicated rendering APIs?
Yes, I predict eventually we will be back at software rendering, with the difference that now it will be hardware accelerated due to running on compute hardware.
The point is that a (self-declared) low-level API like Vulkan should just be a thin interface to GPU hardware features. For instance the entire machinery to define a vertex layout in the PSO is pretty much obsolete today, vertex pulling is much more flexible and requires less API surface, and this is just one example of the "disappearing 3D API".
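As a toy CPU-side illustration of vertex pulling (hypothetical code, not real shader source): instead of describing a fixed vertex layout to the API in the pipeline object, the shader itself indexes into a raw buffer using the vertex index, so the layout lives entirely in shader code.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position; Vec3 normal; };

// The "shader": given only a raw float buffer and a gl_VertexIndex-style
// index, pull the attributes directly. No vertex-input state in the PSO;
// changing the layout means changing this one function, not the pipeline.
Vertex pullVertex(const std::vector<float>& buffer, std::size_t vertexIndex) {
    const std::size_t stride = 6;  // 3 floats position + 3 floats normal
    const float* v = &buffer[vertexIndex * stride];
    return Vertex{{v[0], v[1], v[2]}, {v[3], v[4], v[5]}};
}
```

In an actual shader this is the same pattern with a storage buffer indexed by the built-in vertex index, which is why the fixed-function vertex-layout machinery in the PSO becomes dead weight.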
More traditional rendering APIs can then be built on top of such a "compute-first API", but that shouldn't be the job of Khronos.
I'm fairly sure Vulkan runs just fine on Android? You won't have access to dynamic rendering, so you'll have to manage renderpasses, but I don't think you're going to have issues running Vulkan on a modern Android device.
I have all of that but DX12 knowledge, and 50% of this article still went over my head.