
Has a standing army of angry young men in Germany, resentful of their economic circumstances, ever historically caused any problems?

There is nothing a college can teach you that you cannot learn for free online. The social environment can be replicated for free. You're not paying six figures for an education, you're paying six figures for exactly two things:

1. Someone to write lesson plans for you

2. A piece of paper that tells the world you are capable of conforming with the sometimes-frustrating impositions of an institution for 4 years without making too much of a fuss in the process


> There is nothing a college can teach you that you cannot learn for free online.

This is trivially false outside of some math, CS, and software engineering. Even within IT, learning networking above a basic level requires a well-equipped lab.


What you need is the exams! Maybe $1000 to sit a bunch of paper- or computer-based exams in a hall. Teach yourself beforehand.

I've interviewed Harvard CS grads for SWE roles at big tech who couldn't write a working program for fizzbuzz, for defanging an IP address, or for reversing words in a sentence, in a language of their choice, with leetcode's provided instructions, in half an hour, with unlimited attempts, gentle coaching from me, and the ability to use the internet to search for anything that isn't a direct solution (e.g. syntax).

Yes, more than one.

Either the bar for getting into Harvard cannot possibly be as high as it's made out to be, someone's figured out how to completely defeat degree-validation service providers, or Harvard is happy to churn out a nonzero number of students wholly unprepared for meeting extremely basic expectations for the prototypical job of their chosen degree.


>Harvard is happy to churn out a nonzero number of students wholly unprepared for meeting extremely basic expectations for the prototypical job of their chosen degree

According to one of my professors who did their graduate work at an Ivy, there are apparently a lot of rich kids who can't be failed because their parents donate so much money to the school. But I don't think Harvard has ever had the best undergraduate reputation (among the Ivies); it's better known for its grad/professional programs.


From the people I know who studied at places like Harvard and Yale (not a huge sample size, since I'm Canadian), I've been told that there are essentially two different streams of undergrads there: those on legacy admissions and those who qualified otherwise (via brains, affirmative action, or other means). I was left with the impression that the legacy admissions are mostly people who've coasted through life. The rest are a full spectrum of people.

Most of Harvard's endowment is via alumni, so it doesn't surprise me in the least they continue with it.


If you don’t cram for leetcode, you won’t pass a leetcode interview. It takes some kids a few interviews to figure that out, even if they are from an elite school like MIT. You were just their learning experience.

If you can't solve FizzBuzz in half an hour with a language of your choice while being able to look up syntax, your problem isn't that you failed to cram for leetcode, it's that you don't know how to write code.

There's nothing inherently wrong with not being able to write code, but you probably shouldn't be applying for software engineering roles where the main responsibility of the job is ultimately to write working code.


Just to be clear, I have no problem passing these interviews; I just spent a few weeks cramming leetcode and got a job at Google. Leetcode wasn’t the main reason I was hired, but it was a filter that I had to get through (I’ve never been given fizzbuzz before, but I assume that’s just because it’s no longer in style and hasn’t been for more than a decade). You just don’t throw yourself into on-the-fly coding; you practice, because your competition has and you will look bad if you don’t. Let’s not pretend that any of us are ready to do alien dictionary on the spur of the moment, or that it’s a useful skill for our role.

I'd agree with you 100% if these were Leetcode mediums and hards. They were not, these were quite literally the easiest LC easies I could find.

While my career involves writing code, I am not a SWE, I have never done any formal leetcode prep, and I have no formal education in technology beyond a high school CS class. I have no college degree whatsoever, not even an associate's degree.

I had a rule I stuck to when doing these interviews (which were for a SWE role) that felt very fair to me - I would not give these candidates any problem I couldn't solve in the same circumstances.

For reference, in the allotted time, one such candidate spent a good chunk of their time reading up on JS if/then syntax on w3schools. As I watched, I reminded them they could use any language they wanted, if they were more comfortable or familiar with others, and this Harvard CS grad declined, stating JS was their "strongest" language.

My best guess about these cases is that they were rich kids / legacy admissions who weren't allowed to be failed for political reasons.


I don’t know much about Harvard, except that, as at Stanford, computer science became the biggest major by far in the last couple of decades. It could be that a lot of rich kids are choosing it as a major without much passion for it. It could also have become the default major for people who are planning to go into politics, business, management, or even law (Harvard’s traditional strengths).

Don’t get me wrong, we don’t have much of a choice when evaluating especially junior hires. Even for senior hires you want to make sure they haven’t drifted through their last jobs without actually coding. But on-the-spot performances are different even for the simple stuff; candidates should practice coding questions on the fly regardless, and even the worst possible SWE candidate should be able to pass these with a bit of prep. With a lot of prep they could do leetcode and still suck at the job when they get it.


This is FizzBuzz:

1. Output the numbers from 1 to 100

2. If the number is a multiple of 3, write Fizz instead of the number

3. If the number is a multiple of 5, write Buzz instead of the number

4. If the number is a multiple of 3 and 5, write FizzBuzz instead of the number

Does that really sound like something requiring special practice and preparation? Assuming a decent interviewer would help out with the modulo operator if that was unfamiliar
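For reference, the whole thing is a handful of lines in any mainstream language. A minimal C# sketch of the above (assuming a top-level program with implicit usings):

    for (int i = 1; i <= 100; i++)
    {
        if (i % 15 == 0) Console.WriteLine("FizzBuzz");  // multiple of both 3 and 5; must be checked first
        else if (i % 3 == 0) Console.WriteLine("Fizz");
        else if (i % 5 == 0) Console.WriteLine("Buzz");
        else Console.WriteLine(i);
    }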


Is it provided as you described, or is it more like “please do FizzBuzz”? If it’s the latter, that would explain why some people may have trouble with this task… I think we both agree it’s ridiculous to test whether the interviewee knows what FizzBuzz stands for, and yet… let’s just say I know a few people who treat interviews as a jargon-recall contest.

I don’t know what kind of psychopath would provide a problem with the expectation that you already know the problem by heart

I get the impression you latched on to the word leetcode and took away something very different

FizzBuzz, reversing a sentence -- this is programming your way out of a wet paper bag, not elite and esoteric skills that need advanced study and cramming


Similar concept. You have them do some task like fizzbuzz to see if they can program stuff on the fly that they would never need to do in real life. You practice that, since school doesn't prepare you for it unless you do ACM programming contests or something. The interview demands this to see if the candidate is capable of cramming for the interview, which correlates with the effort and ability they could put into the job, not with the skills they actually apply on the job, which are hard to measure in a one-hour interview slot anyway.

If someone doesn't know how to reverse the words in a sentence, they are absolutely not qualified to be a programmer. Yes, they probably won't do this exact task often, but this is like a doctor who can't distinguish the heart from the liver. It tells you something has gone horribly wrong.

In many languages, the basic version can be just one line of code, if you know the right libraries to leverage. C# leveraging LINQ, for example:

    // using System.Linq; // Reverse() is a LINQ extension method
    String.Join(" ", sentence.Split(' ').Reverse())
    // "hello world foo" -> "foo world hello"

What if the sentence is in Japanese (which doesn’t use spaces)?

Japanese can use spaces and does in some contexts

I agree that some random leetcode-hard problem is not a good indicator, but if you can’t write fizzbuzz or can’t sum an array of integers, you’ve given me important data about your skills as a programmer on that day.

I’ve never had an interview question that asked me to do something straightforward. If I did get a question like that, I would be immediately suspicious about what the catch was.

For campus, we ask very straightforward questions to try to weed out the very lowest of coding fluency at that early stage. (Basically to try to guard against late track changers who haven’t actually coded but know that the SWE market is better than whatever their original interest was.)

If I ask that of a senior candidate, it’s because I got a whiff of “this candidate might not be able to code at all, and I’d like to save us both some time and frustration.”


We ask it of every candidate. At least half the time, I wish I'd done so before getting invested in the "experience" portion, when that ends up not actually translating to ability (and believe me, I am trying to help them succeed).

The beauty is, even a simple exercise answered quickly like "sum of integers" provides ample opportunity to learn a lot about how they think.

Start digging into testability, requirement changes, etc. Change it to a rolling sum (producing a sequence instead of a single value). Do they use an array or an iterator? Do they output straight to the console, or produce an actual function? Could the numbers come from other sources (a database, a queue, etc.)? What might the tradeoffs be? If there's something they're unfamiliar with, are they quick on the uptake when you explain it? And so on.
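To make the rolling-sum variant concrete, here's a minimal C# sketch (names are illustrative, not from any particular interview). Taking an iterator means it works the same whether the numbers come from an array, a lazy sequence, or a database cursor:

    // using System.Collections.Generic;
    // Produce a running total as a sequence instead of a single value.
    static IEnumerable<long> RollingSum(IEnumerable<int> numbers)
    {
        long total = 0;
        foreach (var n in numbers)
        {
            total += n;         // accumulate
            yield return total; // emit each intermediate sum lazily
        }
    }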


I don’t know, I still think 22-year-old me might have flubbed even a simple on-the-fly question (granting that I did my first internship with IBM, writing lots of code, when I was 20).

If they flub half of the time and go on seven such interviews, they have over a 99% chance to pass at least one of them (1 - 0.5^7 ≈ 99.2%).

And that’s for someone with only a 50/50 success rate at summing an array of integers. Do you want to hire someone for a software role who is an underdog to be able to sum an array of ints?


Interviews are learning experiences; you get better at them the more often you do them. My first comment in this thread was that this guy was just a learning experience for these students. Summing integers is easy; understanding someone’s rushed description of what they want done, while rushing to code or write a solution on a whiteboard, is the hard part.

Yeah, LeetCode interviews are their own weird universe. Even smart people get wrecked until they realize you have to treat it like an exam. Most failures aren’t about ability; it’s just pattern recall under pressure. I’ve passed some rounds I had no business passing just because I stayed calm. StealthCoder helped me a bit there since it keeps me from blanking during the actual interview.

> FizzBuzz is now a "leetcode question"

I'm curious about the degree validation thing. Did you or your employer validate the degree before the interview?

In 2025, the US Federal government pulled in a grand total of $5.16 trillion in revenue.

Giving all 258 million adult US citizens $1000 a month totals to $3.096 trillion per year.

Giving them all $2000 a month totals to $6.192 trillion per year - more than all US tax revenue from all sources combined.
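For the arithmetic (monthly amount × 12 months × 258 million adults):

    $1,000 × 12 × 258,000,000 = $3.096 trillion/year
    $2,000 × 12 × 258,000,000 = $6.192 trillion/year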

Of course, we already have a $1.7 trillion deficit, with $38 trillion and counting in debt without the UBI, and I assume you're not planning on defaulting on our $1 trillion+ in annual interest payments on the debt either, right?

How about Social Security, Medicare, and Medicaid, which by themselves take up over half of the entire federal budget, are we keeping those too?

If you'd like, we could confiscate 100% of the assets of every billionaire who's a US citizen and hope to sell all of the non-liquid assets at market prices; that'll get us 9 months' worth of current federal spending levels, less if we're adding UBI on top and not getting rid of any other programs.

Now if you want to get creative, we could keep funding the military and use it to go after all of the other global billionaires, that'll get us almost through a full 4-year presidential cycle, at the low, low cost of invading just about every other sovereign nation on earth to rob their citizens, too.

We could also have the treasury start minting trillion dollar coins to both pay off the debt and fund the UBI, but I don't think you're going to like your $2000 monthly UBI check as much when market rent on a studio is $200,000 a month.

If you have better ideas on how to pay for this, I'm all ears.


Pretty much every economist who has ever thought seriously about UBI has already given an answer. Most of the funding would come from abolishing progressive income taxes. Instead, the highest (or second-highest) bracket would start at $0, i.e., a flat tax at the top rate. With current federal tax rates and incomes, that would raise an additional ~$3 trillion/year.

How will it affect your life any more than the military getting trillions and AI companies gobbling billions right now?

Firebase, GMS (Google Mobile Services). The Alphabet Corporation is part of many security- and privacy-conscious users' threat models, and these users aren't generally thrilled about leaking even limited message metadata like timing to their adversary, particularly when that adversary is known to cooperate with global passive adversaries.

There are actually two builds of Molly: Molly and Molly-FOSS. IIRC Molly uses regular Firebase, which can be faster and more reliable but comes with the above tradeoffs, while Molly-FOSS uses UnifiedPush.

Your point about exercising caution with forks of encrypted messaging apps is a great rule of thumb, and in general, social proof should NOT substitute for competent software security specialists reading and evaluating source code, but given you seem to trust GrapheneOS, it's worth noting that they've formally endorsed Molly: https://xcancel.com/GrapheneOS/status/1769277147569443309


> Your point about exercising caution with forks of encrypted messaging apps is a great rule of thumb, and in general, social proof should NOT substitute for competent software security specialists reading and evaluating source code

Also a great point :) And thank you for the reference.


UnifiedPush doesn't work unless you use Molly exclusively on one device. So if you sync between Signal on a Windows desktop and an Android device, your battery drains faster.

>What database?

The local database used by Signal to organize every message, every contact, every profile photo, every attachment, every group, basically every dynamic piece of data you interact with in the app.

Signal is basically a UI layer for a database. The in-transit encryption is genuinely good enough to be textbook study material for cryptographers, but the at-rest encryption became a joke the moment they stopped using your PIN to encrypt the local DB and requiring it to open the app.

As someone who's been enthusiastic about Signal since it was TextSecure and RedPhone, the changes made over the years to broaden the userbase have been really exciting from an adoption perspective, and really depressing from a security perspective.

TL;DR of Molly is that it fixes/improves several of those security regressions (and adds new security features, like wiping RAM on db lock) while maintaining transparent compatibility with the official servers, and accordingly, other people using the regular Signal client.


Signal is an end-to-end encrypted messaging app. People continue to breathlessly mention the lack of database encryption as a problem, but that never made it a real security issue: its job is not, and has never been, dissuading an attacker who has local access to one of the ends, especially because that is an incoherent security boundary (just like the people who were very upset about Signal using the system keyboard, which is potentially backdoored: if your phone is compromised, of course someone will be able to read your Signal messages).

Database encryption isn't comparable to the keyboard drama. Protecting against malware in your keyboard can be done by using a different keyboard, and is of course out of scope.

But if my phone gets taken and an exploit is used to get root access on it, I don't want the messages to be readable, and there's currently nothing I can do about it. It's not like I can just use a different storage backend.

It's also a very simple solution - just let me set an encryption password. It's not an open-ended problem like protecting from malware running on the device when you're using it.
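To illustrate how simple the core of that is, here's a hedged sketch of the general idea only (not Signal's or Molly's actual scheme; names and parameters are illustrative, and a hardened design would prefer a memory-hard KDF like Argon2). In C# (.NET 6+) it's all standard library:

    // using System.Security.Cryptography; using System.Text;
    string passphrase = "correct horse battery staple";   // user-chosen passphrase
    byte[] dbBytes = Encoding.UTF8.GetBytes("the plaintext database blob");

    byte[] salt = RandomNumberGenerator.GetBytes(16);     // stored alongside the ciphertext
    byte[] key = Rfc2898DeriveBytes.Pbkdf2(
        Encoding.UTF8.GetBytes(passphrase), salt,
        iterations: 600_000,                              // slows down brute force
        HashAlgorithmName.SHA256, outputLength: 32);      // 256-bit AES key

    using var aes = Aes.Create();
    aes.Key = key;
    byte[] iv = RandomNumberGenerator.GetBytes(16);
    byte[] ciphertext = aes.EncryptCbc(dbBytes, iv);      // PKCS7 padding by default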


If someone has root access to your apparently unencrypted phone, then they can just launch the Signal app directly and it'll decrypt the database for them.

Which is to say this is an incoherent security boundary: you're not encrypting your phone's storage in a meaningful way, but planning to rely on entering a PIN every time you launch Signal to secure it? (Which in turn is also not secure, because a PIN is not secure without hardware able to enforce lockouts and tamper resistance... which in this scenario you just indicated has been bypassed.)


Any modern Android is encrypted at rest, but if your phone is taken after first unlock, they get access to the plaintext storage. That's the attack vector.

A passphrase can be long, not just a short numeric PIN. It can be different from the phone unlock one. It could even be different for different chats.


> As someone who's been enthusiastic about Signal since it was TextSecure and RedPhone, the changes made over the years to broaden the userbase have been really exciting from an adoption perspective, and really depressing from a security perspective.

As always, it depends on your threat model.

I use Signal because I value my privacy and don't trust Facebook, not because I'm an activist. So I'm in the target group for Signal's new behavior and I welcome it (especially since, for me to use it to share personal information that I don't want Facebook or advertisers to get, my parents and in-laws have to use it as well, so it must be user-friendly enough).

I wish they'd continue moving in that direction, by the way, and allow shared pictures to be stored directly in the phone's main storage (or at least add an opt-in setting for that), because the security I get from them not being stored there is zero, and the usability suffers significantly.


You're absolutely right that the appropriate level of security does depend on someone's threat model, but I do want to point out that you don't need to be an activist to benefit from privacy.

I'm a really big fan of the airport bathroom analogy. When you use the restroom in the airport, you close the stall door behind you.

You're not doing anything wrong, you have nothing to hide, and everyone knows what you're doing. But you take actions to preserve your privacy anyway, and that's good.

Everyone deserves privacy, and the psychological comfort that comes with it. Dance like nobody's watching, encrypt like everyone is :)


That's not the point the GP was making. They meant "I'd rather give up a bit of privacy for a big increase in usability, as I'm not in the group of people that needs extreme privacy". I happen to agree with them, I get more benefit from a fairly-private messaging app my friends can use than from an extremely-private messaging app nobody in my social circle can use.

> I get more benefit from a fairly-private messaging app my friends can use than from an extremely-private messaging app nobody in my social circle can use.

This is a much better way of saying what I wanted, thank you.


Isn't the phone filesystem encrypted?

Depends on quite a few other factors, but if someone with a GrayKey or Cellebrite appliance gets your phone, there's a good chance they can get in, in both BFU (before first unlock) and AFU (after first unlock) states, even if locked. Once unlocked (or broken into), stock Signal offers you zero protection, while Molly forces them to start a brute-force attack against the password you gave Molly.

This is less true for fully patched GrapheneOS devices than it is for fully patched iOS and other Android devices, but this space is basically a constantly evolving cat and mouse game. We don't get a press release when GrayKey or Cellebrite develop a new zero day, so defense in depth can be helpful even for hardened platforms like GOS.


I don't think this makes a lot of sense because, if the password is quick and easy to type, it can probably be cracked by any such device in the time it takes for a single keystroke. A long and complex password might hold up okay, but for it to actually be secure, you would have to type in the whole password on a phone keyboard every single time you opened the app, which sounds like a terrible experience.
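Rough illustrative numbers, assuming offline guessing with no memory-hard KDF to slow things down:

    6 lowercase letters: 26^6 ≈ 3.1 × 10^8 candidate passwords
    at ~10^9 guesses/second: exhausted in well under a second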

I think, if you were actually willing to do that, it would probably be about as convenient and at least as effective to leave the device powered off and rely on the device full disk encryption and hardware security to protect the data at rest, only powering it on occasionally to check or send messages, then immediately powering back off.


You do not have to type the whole password every time you open the app, only when the database is locked. You can manually lock from an unlocked state whenever you want, even from other contexts (the button to lock it is available in the background notification) or configure an automatic timeout (which is granular down to the second) to lock the database.

This used to be supported in Android (and is how I had my phones set up before). Since Android 13 it's no longer possible: https://source.android.com/docs/security/features/encryption...

Their justification here https://source.android.com/docs/security/features/encryption is that

> Upon boot, the user must provide their credentials before any part of the disk is accessible.

> While this is great for security, it means that most of the core functionality of the phone is not immediately available when users reboot their device. Because access to their data is protected behind their single user credential, features like alarms could not operate, accessibility services were unavailable, and phones could not receive calls.

I'm sure they could have found a better approach than file-based encryption, but it must have been nice to simplify engineering overhead while, at the same time, giving three-letter agencies something that simplifies their work.


Yes, but this was not about the database, really.

Meh, most phones have full disk encryption. For the average person, encryption at rest in Signal doesn't provide very much.

No, they don't. The current releases for Android and iOS both use file-based encryption on most supported phones.

I mentioned some of the pragmatic constraints of fully trusting typical Android / iOS FDE to protect the confidentiality of Signal messages in another comment above, which I would encourage you to read.

That said, Molly definitely isn't designed for the average person's threat model, that's totally true, but it's also worth noting that just because someone isn't aware of a certain risk in their threat model, that doesn't mean they will never benefit from taking steps to proactively protect themselves from that risk.

IMO, security and privacy are best conceptualized not as binary properties where you either have it or you don't, but rather as journeys, where every step in the right direction is a good one.

I'd always encourage everyone to question their own assumptions about security and never stop learning, it's good for your brain even if you ultimately decide that you don't want to accept the tradeoffs of an approach like the one Molly takes towards at-rest encryption.


I assume it's your comment about how, if the phone is compromised, they still need to brute-force the Signal DB.

I find that unconvincing. If your phone is hacked, your phone is hacked. I think it's bad to assume that an attacker can compromise your phone but not log keystrokes. I'm not super familiar with the state of the art of phone malware and countermeasures, but I think anything trying to be secure in the face of a compromised platform is like trying to get toothpaste back in the tube.

> it's also worth noting that just because someone isn't aware of a certain risk in their threat model, that doesn't mean they will never benefit from taking steps to proactively protect themselves from that risk.

Threat models are just as much about ensuring you have all your bases covered as ensuring you don't spend effort in counterproductive ways.

> IMO, security and privacy are best conceptualized not as binary properties where you either have it or you don't

I agree. I think security is relative to the threat you are trying to defend against. There are no absolutes.

> but rather as journeys, where every step in the right direction is a good one.

Here is where i disagree. Just because you take a step does not mean you are walking forward.

A poorly thought out security measure can have negative impacts on overall system security.


Going through customs: in most countries, their policies allow them to search, image, or confiscate your phone, but not evil-maid it or rubber-hose you. For some travelers, that's their threat model.

I find it hard to believe they would be able to compel you to unlock your phone but not compel you to unlock an individual app.

Cellebrite exists...

States are responsible for orders of magnitude more innocent human deaths than every "terrorist" group in human history combined.

They’re also responsible for the preservation of more human life and well being than any other organization.

Man, some people really want humanity to be banging rocks against rocks to scare off the jaguars again.

Sure, but also for y'know, basic civilization and stuff like, idk, food safety, hospitals, roads, disaster preparedness, medicine approvals, building guidelines and all sorts of things that probably end up saving a lot more lives than they cost.

>America also has a party that always runs on the idea of small government and restoring rights to the people. Every time they get power, they do the exact opposite.

You seem to be confused. The Libertarian Party never gets any power. The closest we get is representatives like Ron Paul, Justin Amash, and Thomas Massie, who run as Republicans (which are NOT the party of small government, despite what you may have been told) while acting much more like Libertarians.

Thomas Massie in particular is famous for frequently and routinely standing up against Trump, much to Trump's chagrin.


> Republicans (which are NOT the party of small government, despite what you may have been told)

I believe that's the point.

The Republican Party *pretends* to be "small government", but isn't.


I have never understood why, in these discussions, nobody brings up other specialized silicon providers like Groq, SambaNova, or my personal favorite, Cerebras.

Cerebras CS-3 specs:

• 4 trillion transistors

• 900,000 AI cores

• 125 petaflops of peak AI performance

• 44GB on-chip SRAM

• 5nm TSMC process

• External memory: 1.5TB, 12TB, or 1.2PB

• Trains AI models up to 24 trillion parameters

• Cluster size of up to 2048 CS-3 systems

• Memory B/W of 21 PB/s

• Fabric B/W of 214 Pb/s (~26.75 PB/s)

Comparing GPU to TPU is helpful for showcasing the advantages of the TPU in the same way that comparing CPU to Radeon GPU is helpful for showcasing the advantages of GPU, but everyone knows Radeon GPU's competition isn't CPU, it's Nvidia GPU!

TPU vs GPU is new paradigm vs old paradigm. GPUs aren't going away even after they "lose" the AI inference wars, but the winner isn't necessarily guaranteed to be the new paradigm chip from the most famous company.

Cerebras inference remains the fastest on the market to this day, due to the use of massive on-chip SRAM rather than DRAM, and to my knowledge they remain the only company focused on specialized inference hardware with enough operating revenue to justify the costs from a financial perspective.

I get how valuable and important Google's OCS interconnects are, not just for TPUs or inference, but really as a demonstrated PoC for computing in general. Skipping the E-O-E translation in general is huge and the entire computing hardware industry would stand to benefit from taking notes here, but that alone doesn't automatically crown Google the victor here, does it?


Is including a JSON schema validator and running the output through the validator, such that you can detect when the output doesn't match the schema and optionally retry the input prompt until it does match (possibly with a max number of attempts before it throws an error), too complex an idea for the target implementation concept you were envisioning?

It certainly doesn't intuitively sound like it matches the "Do one thing" part of the Unix philosophy, but it does seem to match the "and do it well" part.

That said, I can totally understand a counterargument which proposes that schema validation and processing logic should be something else that someone desiring reliability pipes the output into.
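For concreteness, a minimal sketch of that validate-and-retry loop in C#; callLlm and validateAgainstSchema are hypothetical placeholders for whatever model client and schema validator you'd plug in, not any real library's API:

    // using System;
    static string GetValidatedOutput(
        Func<string, string> callLlm,              // prompt -> raw model output
        Func<string, bool> validateAgainstSchema,  // true if output matches the schema
        string prompt,
        int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var output = callLlm(prompt);
            if (validateAgainstSchema(output))
                return output;                     // success: schema-valid output
        }
        throw new InvalidOperationException(
            $"Output failed schema validation after {maxAttempts} attempts.");
    }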


I'm not sure. I think I need to use it more to see what the LLMs do with bad data. The design you're suggesting might be the answer though.
