"Some may ask why we didn’t announce these role reductions with the ones we announced a couple months ago. The short answer is that not all of the teams were done with their analyses in the late fall; and rather than rush through these assessments without the appropriate diligence, we chose to share these decisions as we’ve made them so people had the information as soon as possible."
This seems overly cynical. It makes a lot of sense to wait until teams have completed their planning. It allows the executives to sift through the proposed projects and decide specifically which ones to cut.
Also, it is pretty easy to confirm with anybody who works at AWS that the OP2 planning process did in fact complete recently.
A good example of how nearly everyone can be above average is the number of arms. If, like 99+% of the population, you have two arms, then you have an above-average number of arms.
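To make the arithmetic concrete, here is a quick sketch with made-up (hypothetical) population numbers; the exact figures don't matter, only that a few people have fewer than two arms and essentially nobody has more:

```python
# Hypothetical numbers for illustration: in a population of 1,000 people,
# 995 have two arms and 5 have one arm (nobody has more than two).
population = [2] * 995 + [1] * 5

mean_arms = sum(population) / len(population)
print(mean_arms)  # 1.995

# The mean is pulled below 2, so the 995 two-armed people
# all have an above-average number of arms.
assert mean_arms < 2
```

Any distribution where deviations from two only go downward produces the same effect: the mean sits strictly below two, and the two-armed majority is above it.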
I don't think there's anything remarkable about this website that could be considered a "big win" for Lisp. It's a totally run-of-the-mill dynamic web app that could have been developed more quickly in any web framework such as Rails or Django.
In fact, the hacky way it was implemented in Lisp had some clear downsides. In the early days (I'm probably going to mess up the details, but hopefully the gist will come through) there was a notorious failure mode due to the way entities such as stories and comments were stored in memory using closures. These closures had to be cleaned out periodically, so if you stayed on a page too long and then clicked a link on the page, it would be invalid. You'd have to go back, refresh the page, and click the link again.

I don't think it's out of bounds to say this website is basically a rehash of the hacks pg came up with in the mid-90s to implement web apps in Lisp for Viaweb. The fact that he got rich off those hacks may have been a selling point for Lisp 20 years ago when he wrote Beating the Averages, but the world has moved on.
> But they don't mutate data and hence every function call creates a new data in memory taking up space.
Well, new data is only allocated when a function actually changes something. And even then, most functional languages provide persistent data structures that allow efficient sharing of the unchanged parts of the data structure.
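To illustrate the sharing idea, here's a toy sketch (in Python, just for readability) of the simplest persistent data structure, a singly linked list. "Adding" an element builds one new head node that points at the existing list, so the old version survives unchanged and the entire tail is shared rather than copied:

```python
from dataclasses import dataclass
from typing import Optional

# A minimal persistent singly linked list. Prepending allocates exactly
# one node; the rest of the structure is shared with the old version.
@dataclass(frozen=True)
class Node:
    value: int
    rest: Optional["Node"] = None

base = Node(2, Node(3))      # the list [2, 3]
extended = Node(1, base)     # the list [1, 2, 3]

# No copying happened: the new list's tail IS the old list.
assert extended.rest is base  # structural sharing
assert base.value == 2        # the original version is untouched
```

Real functional languages use the same trick on fancier structures (tries, balanced trees), so an "update" typically allocates O(log n) new nodes rather than a full copy.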
> Now my question is where do I learn about these nitty gritty's? Should I read about compilers more? Or systems? Or what?
I'd recommend choosing a functional language and an imperative language and learning both.
> And even then, most functional languages provide persistent data structures that allow efficient sharing of the unchanged parts of the data structure.
So you are saying that in an imperative setting, if we want to use the original data with multiple functions, we have to create copies of it and then modify those copies. So we end up taking extra space there as well. Both styles take up space: one before the function call, the other after.
Not exactly (if I understand the question correctly).
Imperative usually goes with mutation. If I'm willing to do mutation, then I can replace the old value with the new value in the same location, allocating nothing either before or after.
If you want immutability in an imperative language, then yes, you have to allocate at some point.
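The distinction above can be shown in a few lines (a toy Python illustration; the same contrast exists in any imperative language):

```python
# In-place (imperative) update: the same object is modified, so no new
# container is allocated, and every reference sees the change.
nums = [1, 2, 3]
alias = nums
nums[0] = 99
assert alias[0] == 99          # one shared, mutated object

# Immutable-style update: build a new value, leaving the original alone.
orig = (1, 2, 3)
updated = (99,) + orig[1:]     # allocates a fresh tuple
assert orig == (1, 2, 3)       # original preserved
assert updated == (99, 2, 3)
```

The first style reuses the old value's storage; the second pays for a new allocation but keeps every previous version valid.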
For some reason I'm kind of sad seeing Twitter's (very cool) homegrown technologies in the "Old" diagram with the "New" architecture basically Google Cloud. I'm sure it makes sense internally, but it feels like the loss of an innovation center in the streaming space.
I'd highly recommend learning lisp and writing some higher-order functions with macros. You start with an instance of a lower-order function, insert a few backticks and commas where appropriate, wrap that in a defmacro, and boom, you're a metaprogrammer. You can certainly do similar stuff in other languages, but it's almost always a kludge.
The rest of lisp is pretty easy if you've done any sort of functional programming and view the code through a lens that translates (foo ...) to foo(...).