web007's comments | Hacker News

There's a poop symbol on the bottom right; click it to see ways to communicate.

I was surprised to learn the random messengers are other humans!


Curation is probably a good idea, but keeping context is probably a better idea.

The referenced "I don't keep history" philosophy is madness. You won't know what thing would have been useful to keep until you need it $later. Sure, you'll write down some good stuff and maybe alias it.

That's fantastic, do more of that anyway.

Don't pretend you're going to do that for every trick or gotcha you encounter, and don't think you're going to remember that one-off not-gonna-need-it thing you did last week when you find that bug again.

My local history is currently 106022 lines, and that's not even my full synchronized copy, just this machine. It's isolated per session and organized into a ~/.history/YYYY/MM/machine_time_session hierarchy. It has 8325 "git status", 4291 "ll", 2403 "cd .." and 97 "date" entries, which don't matter. Those are literal/complete entries, not counting variations like "date +%Y%m%d", which are separate. I can ignore them, either by grepping them out or filtering mentally, but something as benign as "cd .." is INCREDIBLY useful to establish context when I'm spelunking through what I did to debug a thing 2 years ago.
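
For anyone who wants something similar, here's a minimal sketch of per-session history files in bash. The layout mirrors the ~/.history/YYYY/MM/machine_time_session hierarchy above, but the exact filename format and size limits are illustrative, not my actual config:

    # in ~/.bashrc: give every shell session its own history file
    mkdir -p ~/.history/$(date +%Y/%m)
    export HISTFILE=~/.history/$(date +%Y/%m)/$(hostname -s)_$(date +%Y%m%d%H%M%S)_$$
    export HISTSIZE=100000 HISTFILESIZE=100000
    shopt -s histappend                # append instead of clobbering on exit
    PROMPT_COMMAND='history -a'        # flush each command to disk as it runs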

The even better version of both of these variants is to keep everything AND curate the useful stuff. That whole history (4 years locally) is 10MB, and my entire history compressed would be less than a megabyte.
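
If you want to sanity-check those numbers on your own machine, something like this gives the raw and compressed footprint (assuming the same ~/.history layout as above):

    du -sh ~/.history                         # raw size on disk
    tar -cf - ~/.history | gzip -9 | wc -c    # approximate compressed size, in bytes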

Edit: just realized who posted this, I overlapped with Tod at my first gig in Silicon Valley. Small world!


I think it's a byproduct of how macOS itself saves and restores window state, but the macOS Terminal.app has options to control restoring scrollback on reopen. History is good, but it's really great to be able to scroll back and see more context around the commands I was running. (This can still fail me: in a rarely-touched tab I might have years of scrollback, but I currently have it set to restore 10k lines, and I may find there are only a few days or hours left in a tab where I ran something noisy with verbose logging...)


> LLMs are great for junior, fast-shipping devs; less so for experienced, meticulous engineers

Is that not true? That feels sufficiently nuanced and gives a spectrum of utility, not binary one and zero but "10x" on one side and perhaps 1.1x at the other extreme.

The reality is slightly different - "10x" is SLoC, not necessarily good code - but the direction and scale are about right.


That feels like the opposite of being true. Juniors have, by definition, little experience - the LLM is effectively smarter than them and much better at programming, so they're going to be learning programming skills from LLMs, all while futzing about not sure what they're trying to express.

People with many years or even decades of hands-on programming experience have the deep understanding and tacit knowledge that allows them to tell LLMs clearly what they want, quickly evaluate generated code, guide the LLM out of any rut or rabbit hole it dug itself into, and generally are able to wield LLMs as DWIM tools - because again, unlike juniors, they actually know what they mean.


I don't think junior vs senior is actually that well defined. I have met "senior" 30-year-old programmers and "junior" 30-year-olds (who have also been programming for ~2 decades).


I've had my "2038 consulting" sites since Feb 2011, but someone got epochalypse dot com registered August 2007.


> Storage is cheap, bandwidth is cheap, so who cares?

This is a ridiculous assertion.

They're both cheap in the commercial sense, but neither cheap nor infinite in the UX sense. Time and space matter in the real world.

Google wouldn't have created WebP if there were no tangible, measurable cost benefit to using it over some alternative. Same goes for H.264, HEVC, or AV1: at scale, bandwidth and storage are far from cheap. See the article on the FP today about Google's double-digit EXAbyte storage clusters with 50TB/s read volume each as a real-world example; there's nothing cheap about that.


Look up "CV dazzle" for the equivalent in the modern age, makeup effects to avoid facial detection / recognition.


By far the most common usage in the real world is camouflaging prototype cars while they're being tested on the road: https://www.bmw.com/en/automotive-life/prototype-cars.html

This way paparazzi can take pictures but it's hard to distinguish the shapes.


I think they also sometimes wrap polystyrene blocks under the camouflage, so that particular curves on e.g. the wings or nose are altered by virtue of the camouflage having to conform over them too.


I was about to say, dazzle camouflage seems just about perfect for doing 3D scanning on: so many nice high-contrast areas for measuring stereo disparity!


Yeah, absolutely this. I think in "the old days" a decade or two ago that sort of thing would have been largely out of reach to all but the most determined/well-funded adversary (I'm thinking corporate espionage, magazines etc, not nation states checking out the new Merc etc).

But now probably pretty much anyone in their bedroom could do it in a few hours. Literally the next post after this one is for https://vgg-t.github.io/


That's really interesting. The times I've seen Toyota street-testing pre-release cars, they were not disguised whatsoever, and had unmissable "factory" number plates.


I've seen Mercedes-Benz test their car in camouflage even though the car was already unveiled. I guess they didn't wanna go through the effort to unwrap it. They were also a long way from Germany (with German plates).


I'd say the most common usage in the real world is click-bait surveillance fear articles discussing CV-Dazzle and the entire surveillance state being erected. The theater around all this is as much "it" as the things themselves.


Buddy Peter Thiel hangs out in the White House and provides Palantir services to law enforcement that they would not be allowed to do themselves without a warrant.

The surveillance state is here.


I've seen plenty of these cars around Stuttgart and Munich. These patterns make it surprisingly hard to discern details in their shapes. Add to that the fact that early prototypes are deliberately padded to obscure their actual design and there's virtually no way to tell what the final production car will look like when you see these on the road.


You can see these cars (called "Erlkönig") all the time when driving near car manufacturer headquarters, and often also elsewhere on the Autobahn.


The car manufacturers do this for the coverage (pun intended). It probably also feels cool if you are on the team.


I remember that when it first came out. I get it’s a theoretical or fashion type thing, but the concept seemed flagrantly absurd to me. Block automated facial recognition in a way that in turn makes your face instantly recognisable in any crowd…


I've heard this as a reaction to the strategy before. "Now you're much more recognizable!" Well, yes and no. You're identifiable in the sense that you're unique among people in a crowd. But that equivocates between two different senses of "identify". There's nothing actionable about looking at a person who looks different and saying "well they look different." That doesn't attach to any database or anything.

Meanwhile, positive facial identification attaches to all kinds of legal and intelligence infrastructure. Now, you can be charged with crimes, have a warrant executed against you, can be accused of supporting terrorists if you show up to a protest, etc.

I suppose I don't think the criticism is wrong, but it seems to presume that this is new information not previously understood rather than an intentional calculated risk.


Especially at the time it came out, surveillance footage was mainly going to be reviewed by mark I eyeballs, so the inability of computers to notice where a face was is going to be way outweighed by the person being sooo much more recognizable to a human.

If you don't think there is a disadvantage to looking different at a protest, think about the "QAnon Shaman" from 1/6: him looking different totally made him more of a target for being identified.


I'm struggling to understand how this is responsive. Unless those "mark 1 eyeballs" produce a positive identification of a specific person, you're repeating my own observations back to me. You can conceivably be noticed in a crowd, but not positively identified.

I don't think "camouflage" fits any definition of what Qanon Shaman was wearing, either in a general sense or in the tactical sense we're talking about here.


so first off, if you are noticed in a crowd but not identified, that might single you out to be pulled from the crowd.

Also, if you have distinctive face paint then your image might be shared more, or just noticed more in the images that are shared, to give more opportunities for people to recognize you or to remember your face to be recognized later.

Also, having a distinctive face would make you easier to track across different sets of footage, especially when the technique was originally demoed in 2011.


I understand the mechanism you're tracing, but it feels like there's a category error here. Everything you're saying hinges on the circumstantiality of human reactions and interactions, which is extremely hard to model in a credible way and easily colored by subjective biases informed by things like TV and movies. Those channels of recognition and reporting that would lead to positive identification are nebulous, idiosyncratic, and depend too much on speculation.

It's not to say it wouldn't ever happen, but there's an order-of-magnitude difference between that and guaranteed positive identification, which is what informs the calculated risk.


A hat with infrared LEDs aimed out, such that there was a torus of light around your face. Invisible to humans (generally), only visible to cameras.

It won't "work right" on cameras that have permanent IR filters. Maybe. I haven't tested this in years.

I have a feeling that IR of the correct strength and frequency would be dimly visible to humans, though. Similar to cameras with monochrome night vision via IR LED.


One only needs one or two LEDs near their face, but they need to be blinking at short, irregular intervals. Cameras have mechanical controls for focus and for the amount of light they capture, and those can be attacked with these irregular blinky LEDs: the camera tries to adjust to the bright illumination from the LED, but it's gone before the adjustment is complete, then it's on again, then off again. The result is a person who is never more than a grey silhouette.

I worked in enterprise FR, on one of the globally leading systems, as the lead developer. That scenario defeats pretty much all FR when it's from a single camera. It can be mitigated with multiple camera views, but few FR systems are set up with multiple quality views at every key location.


Interesting. I bet a candle-flicker LED or two in series would add a nice bit of random (or pseudo-random LFSR?) AM noise to the IR LEDs.


That would only maybe work for automated tracking; if someone wants to get the image of your face, they should be able to do it in post, unless the recording quality is shitty - the tiny variations in brightness might contain just enough information to reconstruct the face shape with a little filtering.

(Now I wonder, how narrow-band such IR LED is, and if it could be made to emit a single frequency so sharp, it would create funny diffraction patterns off cameras' surfaces and lens imperfections, clobbering the high-frequency components of the image...)


And it is specific to the FR algorithm it was trained against, so it's more than useless in the real world, where one does not know what FR system is in use.


> As an analogy, driving a car is dangerous. Whenever I drive, I could easily kill someone. But the government doesn’t force me to submit a driving plan any time I want to go somewhere. Instead, if I misbehave, I am punished in retrospect. Why don’t we apply the same policy to research?

"We" decided that Tuskegee was bad enough that it should be stopped before harm is done, and that there is no appropriate or sufficient "punish[ment] in retrospect" for the fallout.

The government makes you get a license to drive at all, then "drive a Pinto" versus "drive a Trabant" are similar enough that they don't require more info. They require you to get different licensure to drive a bigger truck where you could potentially cause more harm, or to drive an airplane. In this analogy the IRB is the DMV/FAA/whatever, and you're asking for permission to drive a tank, a motorized unicycle, a helicopter, an 18-wheeler or a stealth fighter. You don't get a Science License rubber stamp because that's like getting a Vehicle License - the variation in "Vehicle" is big enough that each type needs review.


>"We" decided that Tuskegee was bad enough that it should be stopped before harm is done, and that there is no appropriate or sufficient "punish[ment] in retrospect" for the fallout.

The thing is, although you and the linked article seem to be associating IRB approval just with human studies, these days you need it for mouse studies.


There are different IRBs to review animal research[1]. I believe it was created to provide an ethical framework around the use of animals in science. Same thing: what are "we" accepting of when it comes to research of this nature?

[1] example: https://animalcare.umich.edu/institutional-animal-care-use-c...


In this analogy, you're also asking permission from the IRB to ride a bike or skateboard.


In the context of universities, the equivalent of riding a bike or a skateboard here is having people fill out surveys after events, or piloting new services offered by a student health clinic.

(I guess the point of analogies like these is to force us to sweat the details and examples.)


Another equally good comparison is, say, the existence of flight plans for private pilots, flight logs, etc.

There are escape hatches, too: I doubt many rural Alaskan pilots worry (or need to worry) about these things.


Not sure what your analogy is here; private pilots typically don’t need to file any flight plans except in specific circumstances.


Yeah, odd choice to use cars in that analogy when you very much need advance approval before you're allowed to drive a car.

A driver's license is more like a medical license than IRB approval.


Indeed, and a person with a medical license is able to do much, much more damage than the people who need to ask IRB for permission to do research.

If your point is that we could replace IRBs with some sort of a researcher license, that you need to obtain before being able to do studies that today require IRB approval, then I support it, because while not ideal, it improves over the status quo.


I do think that there’s a debate to be had on how reactive or proactive we should be in ensuring the ethical practice of… well, anything involving significant investment. As an example, reactive systems like malpractice suits or board actions against physicians aren’t easy to navigate if you don’t have many resources.


When HN was an entrepreneurship-oriented community, the overwhelming attitude had been that it’s better to ask for forgiveness than permission. That’s because even if you’re doing something clearly good and morally unimpeachable, having to ask for permission slows you down and invites bikeshedding. Now that HN is a general industry forum, the attitude is more favorable towards preventing risk at the cost of reducing the amount of value produced.


Every time you get one of those surveys, rate it zero, then add "Net Promoter Score is a flawed vanity metric and shouldn't be used for business purposes" in the comment box. Sometimes I link the Wikipedia NPS "Criticism" section as well.

Most places don't care about the results from an actual customer service perspective. The above gets crickets, not even an auto responder.

For companies that do care (tiny startups, mostly) I've gotten IMMEDIATE personal email responses from CEOs and founders asking what they can fix for a zero NPS. That's a great place to link the criticism section if not done previously, and to provide useful, raw feedback on what you love/hate about their products.


This tanks the evaluation of any individuals you interacted with while dealing with the company, which can impact their pay or even push them towards getting laid off for low performance. So I'd advise caution when applying this particular idea, since many employers use these surveys to decide who to fire.


That's "negotiate with terrorists" logic. I'm not going to pretend to take some company's bullshit seriously because they implicitly threaten to fire their employees at random if I don't.

(I do advocate for laws against arbitrary firings and encourage employees to unionise and/or move to jurisdictions with strong labour laws).


It's very likely zero or positive impact on the decompression side of things.

Starting with smaller data means everything ends up smaller. It's the same decompression algorithm in all cases, so it's not some special / unoptimized branch of code. It's yielding the same data in the end, so writes equal out plus or minus disk queue fullness and power cycles. It's _maybe_ better for RAM and CPU because more data fits in cache, so less memory is used and the compute is idle less often.

It's relatively easy to test decompression efficiency if you think CPU time is a good proxy for energy usage: go find something like React and test the decomp time of gzip -9 vs zopfli. Or even better, find something similar but much bigger so you can see the delta and it's not lost in rounding errors.
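
As a rough sketch of that kind of test, assuming gzip and zopfli are installed and using a placeholder filename for whichever large file you grab:

    f=react.production.min.js               # any large file; this name is just an example
    gzip -9 -c "$f" > "$f.gz"                # standard gzip at max level
    zopfli -c "$f" > "$f.zopfli.gz"          # zopfli output is gzip-compatible
    ls -l "$f.gz" "$f.zopfli.gz"             # compare compressed sizes
    # decompress each many times so the delta isn't lost in noise
    time for i in $(seq 1000); do gzip -dc "$f.gz" > /dev/null; done
    time for i in $(seq 1000); do gzip -dc "$f.zopfli.gz" > /dev/null; done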


Reminds me of microwaves - one of the early uses of microwave heating was to reanimate frozen hamsters:

https://en.wikipedia.org/wiki/Microwave_oven#Discovery

