Hacker News | bbminner's comments

I still consider jax.vmap to be a little miracle: fn2 = vmap(fn, (1,2)), if I remember correctly, traverses the computation graph of fn and correctly broadcasts all operations in a way that ensures fn2 acts like fn applied in a loop across the second dimension of the first argument (but accelerated, with auto-gradients, etc.).
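
A minimal sketch of that behavior (the shapes and in_axes here are chosen just for illustration, and fn is my own toy function, not anything from the comment):

```python
import jax
import jax.numpy as jnp

def fn(x, y):
    # operates on single vectors: (3,) dot (3,) -> scalar
    return jnp.dot(x, y)

# map fn over axis 1 of the first argument and axis 0 of the second
fn2 = jax.vmap(fn, in_axes=(1, 0))

x = jnp.ones((3, 5))  # batch of 5 vectors, stacked along axis 1
y = jnp.ones((5, 3))  # batch of 5 vectors, stacked along axis 0
out = fn2(x, y)       # like looping i over 0..4: fn(x[:, i], y[i])
print(out.shape)      # (5,)
```

Because fn2 is still an ordinary traced JAX function, it composes with jit and grad as usual.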

A long while ago I heard something (that might have been an urban myth) about Bose putting useless weight into their headphones to make them appear more "substantially professional". Is that a myth, or have they pivoted towards actual quality since the early days?

You’re probably thinking of Beats. But even then, it turns out that the original story was wrong because they were knock-off Beats.

https://www.core77.com/posts/38311/Teardown-Reveals-Beats-He...


There used to be a whole culture of Bose kind of being a-holes (like 20 years ago). I used to work at CNET back then, and there was a kind of "yeah, Bose is OK" vibe, but it was always tinged with "but they want to sue you if you say mean things," whether they did or not.

As far as I know now, things have changed substantially. I would assume this includes engineering quality and honesty.

This bricking avoidance seems like another note in that positive direction.


My understanding of modern Bose kit based on RTINGS reviews is that it's fairly competitive in its price range. Still a touch pricey for what you get, but not bad by any means—like 2nd/3rd best, and occasionally punching above its weight for their midrange offerings. They seem to be #1 for comfort (headphones) though.

I don't own any, I've just read reviews from when I was in the market for new headphones and earbuds.


Their headphones are remarkably comfortable

I have a pair of Bose QC35s and a set of AirPod Pros. More often than not, for a late night YouTube binge in bed, I’m reaching for the QC35s.

I believe that's always been a thing. A long time ago I read this teardown article [1] of real vs counterfeit Beats headphones. And even the counterfeit versions had metal weight added to make it feel like the real Beats headphones!

[1] https://blog.bolt.io/our-beats-were-counterfeit/


Those were Beats, not Bose, and it was true. IMO Bose does a great job of being both consumer friendly and high quality. There are others with higher fidelity for the same price (Shure, Sennheiser) but you often lose the comfort and portability Bose offers.

Can you give the source for that? The one posted shows the claim was debunked by the fact that they were not really Beats.

https://www.core77.com/posts/38311/Teardown-Reveals-Beats-He...


Their aviation headsets are infamous for being heavy, and the latest generation, the A30, hasn't changed much except that it's much lighter, because they swapped out some metal parts for plastic.

That was Beats. Pre-acquisition, IIRC; it was years ago.

Can you give the source for that? The one posted shows the claim was debunked by the fact that they were not really Beats.

You are right, my memory only includes the original report and not the follow-up that determined it was bunk. Sadly, it is too far past my post, so I cannot edit it. Apologies for perpetuating bad info.

I'd be curious to know the breakdown of "wages and benefits" between academics, teachers and administrative staff. I've heard that admin takes up a huge fraction of the cost. How large can it be?


From https://dukechronicle.com/article/duke-university-facility-a...:

> Duke has a F&A rate of 61.5% with the NIH, which means that for every dollar provided to a Duke faculty member conducting research, an additional 61.5 cents is given to the University to compensate for its F&A costs.

This is not an uncommon overhead rate for a large university, and is competitive with overhead rates at the largest government contractors. That doesn't mean it's entirely reasonable or a sign of an efficient operation.


What distinction do you draw between academics and teachers? Those are usually overlapping roles.

According to https://publicaffairs.vpcomm.umich.edu/key-issues/compensati... (just an example of a public university), it's $376K to executives, $481K to deans, and $152.7K to faculty in FY2013. Deans usually count as ~50% admin, so we could call that $376K + $240.5K = $616.5K to admin and $240.5K + $152.7K = $393.2K to faculty, roughly a 3:2 ratio.


I'm an academic and it's difficult for me to imagine what the fuck deans do that is worth ~3-4 times as much as the people actually teaching and doing research. Fire them into outer space, I say.


> I'm an academic and it's difficult for me to imagine what the fuck deans do that is worth ~3-4 times as much as the people actually teaching and doing research. Fire them into outer space, I say.

I'm also an academic. To me, the primary role of a dean is to insulate me as much as possible from upper admin. I've had deans who are good at this job, and those who either aren't good at it, or think that their job is something else. The ones who are good at what I think their job is ... I'm not sure I'd want to see them get 3–4x my pay, but I'm definitely willing to pay a premium to have someone else deal with upper admin.


So it’s a management layer created to help protect people who actually provide value from the OTHER management layer. Sounds like a made up problem to me, and also an example of what everyone complains about when it comes to higher education: too much admin pushing costs higher.


I mean, this is an issue in private industry as far as I've seen as well. As a company grows, layers of middle management are added to translate and implement policies from other management layers.


My relative is an administrator. One of the things he does is manually process the flood of requests to override this or that policy, because the system for enforcing the complex course selection and graduation requirements (e.g., prerequisites) doesn't work perfectly. The other is to adjust those requirements in real time to comply with complex, contradictory, and unclear mandates handed down from above (such as getting rid of all traces of wokeness).

Pay him his professor salary, and he'd never have stepped up to the role.

"All complex systems operate in failure mode 100% of the time." What this means is that systems operate with some of their automatic controls bypassed, and with those processes being carried out manually. The Gimli Glider took off with two broken fuel gauges.

My thought about bureaucracy is that you can automate complex human processes only to a certain point, and then the system needs some manual override capability, and possibly human interfaces, to work. This is what bureaucrats do. The reason why it seems chaotic and inefficient is that the easy stuff has been automated away, leaving only the hard stuff.

I can't vouch for every bureaucratic process, and bureaucrat, being optimally efficient or necessary. But in the past few months, I've observed the hard lesson of what happens when you think you can deal with bureaucracies that you think are wasteful by taking a chainsaw to them. I don't believe in that approach any more, even for dealing with systems that I hate.


> The reason why it seems chaotic and inefficient is that the easy stuff has been automated away, leaving only the hard stuff.

Some combination of:

* https://en.wikipedia.org/wiki/Selection_bias

* https://en.wikipedia.org/wiki/Survivorship_bias


"Academic" is kind of a broad brush. A professor and a teacher are both academics. One difference is tenure and research. A professor is eligible for tenure, and expected to do research or scholarship. They can train grad students.

In contrast, most undergraduate teaching is done by "adjuncts" for whom the job is essentially gig work. Moreover, professors are considered "faculty" and adjuncts "staff," making it confusing to figure out how many employees of a university are engaged in teaching versus doing other things. For instance a faculty-to-staff ratio would be misleading.

Disclosure: I was an "adjunct" many years ago.


In my experience, adjuncts are not staff. They (we) are not considered faculty either at most institutions, but are a separate category of their own.

Adjuncts are basically contract labor who in general produce much more revenue than they cost.


If you’re not directly teaching or doing research, imo you’re an admin or “staff”. You need to look at what people actually do.


I was really confused about case folding; this page explained the motivation well: https://jean.abou-samra.fr/blog/unicode-misconceptions

""" Continuing with the previous example of “ß”, one has lowercase("ss") != lowercase("ß") but uppercase("ss") == uppercase("ß"). Conversely, for legacy reasons (compatibility with encodings predating Unicode), there exists a Kelvin sign “K”, which is distinct from the Latin uppercase letter “K”, but also lowercases to the normal Latin lowercase letter “k”, so that uppercase("K") != uppercase("K") but lowercase("K") == lowercase("K").

The correct way is to use Unicode case folding, a form of normalization designed specifically for case-insensitive comparisons. Both casefold("ß") == casefold("ss") and casefold("K") == casefold("K") are true. Case folding usually yields the same result as lowercasing, but not always (e.g., “ß” lowercases to itself but case-folds to “ss”). """
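
The quoted behavior is easy to check in Python, where str.casefold implements Unicode case folding:

```python
# U+212A KELVIN SIGN vs the Latin capital letter K
kelvin = "\u212A"
assert kelvin != "K"                        # distinct code points
assert kelvin.lower() == "k"                # both lowercase to Latin k
assert kelvin.casefold() == "K".casefold()  # fold for comparison

# "ß" uppercases to "SS" but lowercases to itself;
# only casefold() makes it comparable with "ss"
assert "ß".upper() == "SS"
assert "ß".lower() != "ss".lower()
assert "ß".casefold() == "ss".casefold()
```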

One question I have is: why have a Kelvin sign that is distinct from the Latin K, and other indistinguishable symbols? To make quantities machine-readable (oh, this is not a 100K license plate or a money amount, but a temperature)? Or to make it easier for specialized software to display it in the correct places/units?


They seem to have (if I understand correctly) degree-Celsius and degree-Fahrenheit symbols. So maybe Kelvin is included for consistency, and it just happens to look identical to Latin K?

IMO the confusing bit is giving it a lower case. It is a symbol that happens to look like an upper case, not an actual letter…


And why can't the symbol be a regular old uppercase "K"? Who is this helping?


Unicode wants to be able to preserve round-trip re-encoding from this other standard, which has separate letter-K and degree-K characters. Making these small sacrifices for compatibility is how Unicode became the de facto world standard.


The "other standard" in this case being IBM-944. (At least looking at https://www.unicode.org/versions/Unicode1.0.0/ch06.pdf p. 574 (=110 in the PDF) I only see a mapping from U+212A to that one.)


The ICU mapping files have entries for U+212A in the following files:

    gb18030.ucm
    ibm-1364_P110-2007.ucm
    ibm-1390_P110-2003.ucm
    ibm-1399_P110-2003.ucm
    ibm-16684_P110-2003.ucm
    ibm-933_P110-1995.ucm
    ibm-949_P110-1999.ucm
    ibm-949_P11A-1999.ucm


[flagged]


That "deeper explanation" seems incorrect, considering that the KSC column is empty in the mapping linked above.


I think just using uppercase Latin K is the recommendation.

But, I dunno. Why would anybody apply upper or lower case operators to a temperature measurement? It just seems like a nonsense thing to do.


Maybe not for text to be read again, but might be sensible e.g. for slug or file name generation and the like...


That’s an interesting thought.

IMO this is somewhere where, if we were really doing something, we might as well go all the way and double-check the relevant standards, right? The filesystem should accept some character set for use as names, and if we're generating a name inside our program we should definitely pick a character set that fits inside what the filesystem expects and that captures what we want to express… my gut says uppercase Latin K would be the best pick if we needed to most portably represent Kelvin in a filename on a reasonably modern, consumer filesystem.


I wonder if you can register a domain with it in the name.


A symbol may look different from the original letter, for example N - №, € - E (Є), S - $, integral, с - ©, TM - ™, a - @, and so on.

However, those symbols don't have lowercase variants. Moreover, a lowercase k means kilo-, not a «smaller Kelvin».


Although it is a prefix in that case, so we shouldn't expect to see k alone.

To maximally confuse things, I suggest we start using little k alone to resolve another annoying unit issue: let’s call 1 kilocalorie “k.”


Probably useful in a non-latin codeset?


Having a dedicated Kelvin symbol preserves the semantics.


> One question I have is why have Kelvin sign that is distinct from Latin K and other indistinguishable symbols?

To allow round-tripping.

Unicode did not win by being better than all previously existing encodings, even though it clearly was.

It won by being able to coexist with all those other encodings for years (decades) while the world gradually transitioned. That required the ability to take text in any of those older encodings and transcode it to Unicode and back again without loss (or "gain"!).


> One question I have is why have Kelvin sign that is distinct from Latin K and other indistinguishable symbols?

Unicode has the goal of being a 1:1 mapping for all other character encodings. Usually, weird things like this exist so there can be a 1:1 reversible mapping to some ancient character encoding.


It is very interesting though! I have been interested in this kind of language design for interactive UI for a while. If there were a quick article outlining, for more experienced developers, how all the "with" and "on" and "own" work using references to existing language features, I'd love to read it. Right now it reminds me of the declarative style of Qt UI and the online primitives introduced in Godot, but I haven't looked at it in more detail. Also love your take on async. Wishing you all the best luck; this seems like a really well-thought-through language design!


This is a very kind comment, thank you! Yes it has been a LOT of iteration to make the language what it is. I think it would make sense to have a page for experienced developers to better understand what Easel is. Right now maybe the closest is this page: https://easel.games/docs/learn/key-concepts

Thanks again for your kind words!


This is really cool; these patterns (run once now, and then once triggered) surface all the time and usually turn into ugly code! How many iterations did it take?

So most lines like A { B{ on D{ print() } } C{} } equivalently desugar into something like a = A; b = B(); a.mount(b); d = D(); d.on(f); b.mount(d); .. ?

I got confused by a couple of things. One of them is whether object parameters act like context parameters and therefore depend on names in the caller's variable scope? I.e., if I define 'fn ship.Explode', I must have a variable ship at the call site? But I can still otherwise pass it explicitly as alien_ship.Explode(), right? How do I know if a particular call takes the current object into account? If I have two variables in my nested scope, ship and asteroid, and both have ship.Explode and asteroid.Explode, which one is picked if I do just `Explode`? The innermost? Or I can't have two functions like that, because the first thing is literally just a named variable and not a "method"?

Overall, if you could provide some examples of how things could have de-sugared into a different language, that'd be very interesting! Maybe with some examples of why this or that pattern is useful? I think it does a good job for things like on/once, but I'm not grokking how one would structure an app using this variable-scoping "use" clause and object parameters.

Also not sure how to define functions that could be on'd or once'd. (Ah, i see, delve)


Did the HoMM series ever get popular outside Eastern Europe? I have yet to meet a person born and raised in the US who has heard of it, but maybe I am just unlucky or targeting the wrong demographic.


I was wondering about the reason and suspect a lack of advertising. There was an ad campaign in Computer Gaming World magazine: http://heroescommunity.com/viewthread.php3?TID=40698&pagenum...

CGW issue 144 (1996/07): small, piss-poor banner.

CGW 146 (1996/09): full page, but confusing, with only two tiny bad game screens and a wall of text.

CGW 147 (1996/10, the month of release): same wall of text and one slightly better tiny screen.

CGW 148 (1996/11): half-page ad, but finally full of good screens and an actual description of what the game is all about.

CGW 151 (1997/02): they finally got the hang of this, with a detailed description of what to expect from the game.

CGW 152 (1997/03), CGW 153 (1997/04), CGW 156 (1997/07): reverting to the 148 style.

CGW reviewed it very late, in 1997/02, giving it 100%.

I can't find any ads in period-correct PC Gamer. They reviewed it well, but the annual PC Gamer Top 100 for 1996 didn't even mention it, and 1997 put it far back at 25: https://www.pixsoriginadventures.co.uk/pc-gamer-top-100-1997...


I played Might and Magic as a child but I never knew about this series nor have I ever played it. I also don't know anyone who has either now that you mention it.


Let's say not as popular as other games, but it was popular enough to pirate in the US.


Friends and I loved Heroes 3, but we're Québécois.


It found a following in some parts of LATAM as well.


I think it is difficult to oversell Bob Nystrom's Game Programming Patterns book

https://gameprogrammingpatterns.com/contents.html


While this is better, a person modifying PizzaDetails might or might not expect the change to affect the downstream pizza deduplication logic (wherever it got sprinkled throughout the code). They might not even know that it exists.

Ideally, imho, a struct is a dumb data holder - it is there to pass associated pieces of data together (or to hold a complex, unavoidable state change hidden from the user, like Arc or Mutex).

All that is to say that adding a field to an existing struct, and possibly populating it sparsely in some remote piece of code, should not change existing behavior.

I wonder whether there's a way to communicate to whoever makes changes to the pizza details struct that it might have unintended consequences down the line.

Should one wrap PizzaDetails with a PizzaComparator? Or better yet, provide it as a field in PizzaOrder? Or are we running into Java-esque PizzaComparatorBuilderDefaultsConstructorFactory territory?

Should we introduce a domain-specific PizzaFlavor right under PizzaDetails that copies over the relevant fields from PizzaDetails, so that PizzaOrder compares two orders by constructing and comparing their flavors instead? A lot of boilerplate... but what counts as important to the pizza flavor is explicitly marked.
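
A sketch of that projection idea in Python (all names and fields here are hypothetical; the thread doesn't specify a language):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PizzaDetails:
    size: str
    toppings: tuple
    delivery_note: str  # new fields land here by default

@dataclass(frozen=True)
class PizzaFlavor:
    """Explicitly lists the fields that matter for deduplication."""
    size: str
    toppings: tuple

def flavor(d: PizzaDetails) -> PizzaFlavor:
    # adding a field to PizzaDetails does not silently change this
    return PizzaFlavor(d.size, d.toppings)

a = PizzaDetails("L", ("cheese",), "ring twice")
b = PizzaDetails("L", ("cheese",), "leave at door")
assert a != b                  # the details differ
assert flavor(a) == flavor(b)  # but the orders taste the same
```

The boilerplate cost is real, but a change to PizzaFlavor now reads as a deliberate change to deduplication semantics rather than an accident.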

In a prod codebase I'd annotate this code with an "if you change X, change Y" pre-submit hook - this constraint appears to be external to the language itself and lives in the domain of "code changes over time". Protobufs successfully folded versioning into the language itself, though. Protobufs also have field annotations; an "{important_to_flavor=true}" field annotation would be useful here.


I have not looked into the HM algorithm much, but is there an (educational or performance-wise) advantage over implementing a (dumb) SAT solver and expressing the problem as a SAT problem? It always seemed like the "natural representation" for this kind of problem to me. Does knowing that these are types _specifically_ help you somehow / give you some unique insights that won't hold in other similar SAT problems?


Keep in mind one of the most important attributes of a good compiler is clearly explaining to the user what caused compilation failure and why. If you try to solve in a very abstract and general space it could be challenging to give an actionable error message.


I suspect you've intentionally phrased this to avoid referencing type checking in particular. From what I understand, this is also the main reason that mainstream programming languages tend to use hand-written parsers rather than generators, and I imagine it applies to a lot of other features as well.


Yup, that's basically it. "SAT says no" isn't a very useful error message.


how would you encode a program like

    function f<T>(x: T) { return x; }
    function g(x: number) { return { a: f(x), b: f(x.toString()) }; }
in sat?

if that's easy, how about length and f in:

    function append<T>(xs: list<T>, ys: list<T>) {
      return match xs {
        Nil() -> ys,
        Cons(hd, tl) -> Cons(hd, append(tl, ys)),
      };
    }
    function flatten<T>(xs: list<list<T>>) {
      return match xs {
        Nil() -> Nil(),
        Cons(hd, tl) -> append(hd, flatten(tl)),
      };
    }
    function map<T, U>(f: (T) => U, xs: list<T>) {
      return match xs {
        Nil() -> Nil(),
        Cons(hd, tl) -> Cons(f(hd), map(f, tl)),
      };
    }
    function sum(xs: list<number>) {
      return match xs {
        Nil() -> 0,
        Cons(hd, tl) -> hd + sum(tl),
      };
    }
    function length<T>(xs: list<T>) { return sum(map((_) -> 1, xs)); }
    function f<T>(xs: list<list<T>>) {
      return length(flatten(xs)) === sum(map(length, xs));
    }
hm-style inference handles polymorphism and type application without a complicated sat encoding


For a bounded size of types of sub-expressions, HM inference is quasi-linear in the size of the program, because the constraints appearing in the HM algorithm are only equalities between meta-variables. An NP-complete SAT solver is not really a good fit for this kind of simple constraint, even more so when typechecking often represents a significant part of compilation time.

(Of course the tricky part of the definition above is that the size of types can theoretically be exponential in the size of a program, but that doesn't happen for programs with human-understandable types)


If you have a bound on the size of the largest type in your program, then HM type inference is linear in the size of the program text.

The intuition is that you never need to backtrack, so boolean formulae (ie, SAT) offer no help in expressing the type inference problem. That is, if you think of HM as generating a set of constraints, then what HM type inference is doing is producing a conjunction of equality constraints which you then solve using the unification algorithm.
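
A toy illustration of that conjunction-of-equalities view, with meta-variables unified by union-find and no backtracking (this omits the occurs check and generalization, and all the names are made up):

```python
# Each type meta-variable or concrete type is a string; union-find
# merges equivalence classes as equality constraints arrive.
parent: dict[str, str] = {}

def find(t: str) -> str:
    while parent.get(t, t) != t:
        parent[t] = parent.get(parent[t], parent[t])  # path halving
        t = parent[t]
    return t

def unify(a: str, b: str) -> None:
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# constraints from inferring `id = \x -> x` applied to an int literal:
#   'a = 'b   (parameter type equals result type)
#   'a = int  (the argument is an int)
unify("'a", "'b")
unify("'a", "int")
assert find("'b") == "int"  # 'b is inferred to be int
```

Every step only merges two classes, which is why no SAT-style search over truth assignments is ever needed.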


I would not say that this is due to a social media bubble - HN is the only social media I use, I have friends across the political spectrum, and still I can relate to many of the points the author raised. At one point, I found myself increasingly uncertain and conflicted about my own "actual convictions" and "underlying motives", and about whether someone else (even potentially!) labeling me as a creep or assuming poor intentions automatically makes me one. Some unfortunate preceding life experiences corroded my self-image as well, which might have contributed to it, but that's not the point.

I'd actually go further and argue that what appears to twist this social fabric inside out is not only the online nature of the interaction itself, but its corporate, centralized, algorithmic nature. I am in no way a proponent of decentralizing everything (social media, money, infra, etc.) for the sake of it - most systems work more efficiently when centralized; that's just a fact of reality. But maybe the fact that ads, corporate communications (LinkedIn-speak posts / Slack / McDonald's Twitter account) and social interactions now live in the same space (and are barely distinguishable in feeds) has somehow forced these spaces to use a maximally uniform, neutered language that lacks the subtleties allowed in 1:1 communication? So people speak in political slogans and ad jingles instead of actual thoughts? Because these spaces NEED people to speak like that to stay civil and "corporately acceptable"? I am just brainstorming, in no way suggesting that a "free for all" is the solution.

I watched a movie called Anora recently, and toward the end there's a dialogue along the lines of:

- If not for these other people in the room, you'd have raped me!

- No, I wouldn't.

- Why not?

- (baffled and laughing) Because I am not a rapist.

One way to interpret this movie, this dialogue, and what follows is that the main female character has been used and abused her entire life by the rich / the capitalist system in general, embodied in particular by the character of a rich, bratty child of an oligarch - that her world almost assumes this kind of transactional exploitation as part of human relationships - and she struggles to feel safe without it, almost seeking more exploitation to feel somewhat in control. And the other person in the dialogue above (who is not a rich child) counters that by asserting and knowing very well who he is (and isn't), and that knowledge doesn't require or provide any further justification.

TL;DR: maybe the magical dream of a conflict-free society where people understand each other is not ours after all - maybe it is the ideal grassland for ad-driven social media to monetize our interactions in a safe, controlled fashion? One piece of evidence towards that is the de-personalized, neutered, templated nature of the kind of "advice" that people give online to earn social credit - which leaks into real-world 1-to-1 interactions in the form of anxiety about being "watched and judged", as described by the author.


Conflict is actually healthy to have as long as it’s not violent and there’s space for other ways of relating to the same person.


The most economically productive places in the world, i.e. San Francisco and especially Seattle, are famously passive-aggressive and conflict-avoidant. It's so well known in Seattle that they have a name for it:

https://en.wikipedia.org/wiki/Seattle_Freeze

Maybe having conflict isn't healthy, and letting people grumble about things under their breath is the right way forward, unironically.

