Pamar's comments | Hacker News

Not sure I understand your point about this:

> Everybody already has local regional tickets anyway. And most people can't be in more than one place at a time anyway. And most people stay in the same region most of the time anyway.

I live in Rostock. So if I want to go to Berlin or Hamburg (you know, where stuff like actual airports are) I am crossing "regional borders" even if it is a 200-250 km trip to each city.


Most people don't use regional services to travel long distances. And you pay for proper inter-city services.

The point is, if you are in Hamburg, you are no longer in Rostock. So you are only using regional services in exactly one place.


At least from Rostock to somewhat closer destinations you have both options. There's a bi-hourly IC to Hamburg or Berlin and another bi-hourly RE towards the same destinations. They're not terribly different in terms of travel time, but one is a regional train and one is an inter-city train.

Sure, long distances (I had to travel from Rostock to Tübingen last weekend) are typically not taken with regional trains (although you technically can; I did that as a poor student a few times, it just takes 16 hours instead of 10), but over medium distances (around 2–3 hours) you often have both options.


About non-replaceable batteries: from what I understand, if a battery can be replaced by any random device owner, you must design it with a robust cell to avoid the risk of it being punctured, broken, or crushed.

And therefore you have more shell, less actual battery, and so it doesn't last as long.

This does not mean that I believe this was done exclusively for altruistic reasons. More like: this will result in a slightly better experience for the user... and more revenue for Apple. So let's do it.


I've worked in consumer electronics; batteries are built in because reviewers will endlessly trash a product that is just 1mm thicker than anything Apple puts out, and they fawn over Apple because the products are so thin.

If anyone releases a product that is just a tiny bit thicker than last year's, expect headlines like "new super-thick phone doesn't fit in pockets, causes back problems".

A small exaggeration? Not by much; reviewers are nasty about device thickness.

Then 70% of people shove a case on and it really doesn't matter.

There are good water-ingress reasons for non-replaceable batteries too: making a device waterproof while keeping the battery replaceable does add a good deal of thickness.

Anyway, you can get a battery replaced at a phone shop for a reasonable rate, so IMHO it isn't as big of a deal nowadays.


We need to stop making products for reviewers.


No one wants to, but that is how many consumers decide what to buy. It is especially true of early adopters who are tuned into the review scene for their favorite products.


I think that what erased the "programmer vs computer illiterate" dichotomy was BASIC in the 80s.

I've met lots of "digital natives" and they seem to use technology as a black box and click/touch stuff at random until it sorta works, but they are not very good at creating a mental model of why something is behaving in a way which is not what was expected and verifying their own hypotheses (i.e. "debugging").


Agreed. And I feel it fair to argue that this is the intended interface between proprietary software and its users, categorically.

And more so with AI software/tools, and IMO frighteningly so.

I don’t know where the open models people are up to, but as a response to this I’d wager they’ll end up playing the Linux desktop game all over again.

All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?

Doesn’t have to be individually, as groups of people can be good at understanding something beyond an individual. But a productivity gain isn’t on its own a sufficient response to this question.

Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).

It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.


I am from that era, so I might add something that perhaps is not obvious at all nowadays.

The microcomputer explosion gave birth to a large number of actual paper magazines, and at least 50% of their content was... actual source listings you had to manually retype. BASIC was already fragmented into a billion different flavors and dialects (especially if your program had any kind of graphics) so the more ambitious user could also try their hand at translating a listing from - say - TRS-80 to Apple BASIC.

In any case you were directly exposed to the actual source code, and tweaking or experimenting with it felt very natural.


What about Cara, Vero and MeWe?

These are just the last three social media platforms I subscribed to, and they range from Stagnant to Pretty Much Dead.

I suppose that the problem is that if you already have 1000+ followers on, say, Twitter or IG, you try posting the same stuff in parallel on both... after 1 month of doubled effort you notice that your follower count on the new platform is an order of magnitude smaller... you want to stop double posting because it is too time consuming. Guess which one you will opt out of?


IMO even Bluesky borders on “dead”. Not from an activity standpoint, but from the standpoint of “what the hell are people posting here”.


What about "microservice", then?


This is perhaps a case where miscommunication saved an entire industry.

I once got a new hire from Uber and for months on end his complaint was that “the services are too big”.


I once wrote a small class at work, and by the time I left it was over 8k lines long. People joked it was my fault for calling it HelperUtil instead of something more descriptive. It was a dumping ground for all the stuff people didn't want to think about. I wonder if something like that is possible in the microservice world?


It probably wasn't a joke. If you call something HelperUtil, it will become a dumping ground. That's a learnable lesson about naming, a mistake, but it's not learnable if it keeps getting described as a joke.

C# accidentally solved this problem with extension methods: these little helper utils at least get grouped by type rather than dumped in one humongous file. Or maybe that was part of the design team's intention behind them all along.

And because they're static you can easily see when services or state are getting passed into a method, clearly showing when it should in fact be some sort of service or a new type.


You’ve never seen `public static class Extensions` in a project named Something.Shared?


In the microservices world this is the monolith itself sitting in the center :/

Even in architectures that start out distributed, I’ve seen the “involuntary monolith” arise.

Way too common, unfortunately.


Of course, it just becomes the HelperUtilService!


Same thing: a typical OS has tons of microservices talking over OS IPC.

Sun RPC was microservices.

But acknowledging that they are a decades-old concept isn't cool, and doesn't sell conference tickets, books, and consulting training.


I definitely agree with that; at least, this is how I use ChatGPT in 99% of the cases.


I am on the move so I cannot check the video (but I did skim the pdf). Is there any chance to see an example of this technique? Just a toy/trivial example would be great, TIA!


For the Monte Carlo Method stuff in particular[1], I get the sense that the most iconic "Hello, World" example is using MC to calculate the value of pi. I can't explain it in detail from memory, but it's approximately something like this:

Define a square of some known size (1x1 should be fine, I think)

Inscribe a circle inside the square

Generate random points inside the square

Look at how many fall inside the square but not the circle, versus the ones that do fall in the circle.

From that, using what you know about the area of the square and circle respectively, the ratio of "inside square but not in circle" and "inside circle" points can be used to set up an equation for the value of pi.

Somebody who's more familiar with this than me can probably fix the details I got wrong, but I think that's the general spirit of it.
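
In Python, a minimal sketch of that recipe as I understand it might look like this (the function name and sample count are just illustrative):

    import random

    def estimate_pi(n: int) -> float:
        """Estimate pi by sampling random points in a 1x1 square
        with a circle of radius 0.5 inscribed in it."""
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            # The point is inside the inscribed circle if its distance
            # from the center (0.5, 0.5) is at most the radius 0.5.
            if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
                hits += 1
        # area(circle) / area(square) = pi * 0.5^2 / 1 = pi / 4,
        # so pi ~= 4 * (fraction of points that hit the circle).
        return 4 * hits / n

    print(estimate_pi(1_000_000))  # prints something like 3.1415...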

For Markov Chains in general, the only thing that jumps to mind for me is generating text for old school IRC bots. :-)
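
Something like this tiny Python sketch, assuming a word-level first-order chain (the corpus and function names are made up for illustration):

    import random
    from collections import defaultdict

    def train(text):
        """First-order Markov chain: map each word to the list of
        words observed to follow it."""
        chain = defaultdict(list)
        words = text.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)
        return chain

    def babble(chain, start, length=12):
        """Random-walk the chain to produce IRC-bot-style output."""
        out = [start]
        while len(out) < length:
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat ate the fish"
    print(babble(train(corpus), "the"))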

[1]: which is probably not the point of this essay. Sorry for muddying the waters; I have both concepts kinda 'top of mind' in my head right now after watching the Veritasium video.


> From that, using what you know about the area of the square and circle respectively, the ratio of "inside square but not in circle" and "inside circle" points can be used to set up an equation for the value of pi.

Back in like 9th grade, when Wikipedia did not yet exist (but MathWorld and IRC did), I did this with TI-Basic instead of paying attention in geometry class. It's cool, but it converges hilariously slowly. The in-versus-out test is basically "distance from origin > 1", but you end up double-sampling a lot using randomness.

I told a college roommate about it and he basically invented a calculus approach, summing pixels in columns or something, as an optimization. You could probably further optimize by finding upper and lower bounds of the "frontier" of the circle, or iteratively splitting rectangle slices ad infinitum, but that's probably just reinventing the state of the art. And all this skips the cool random sampling the Monte Carlo algorithm uses.
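
If I'm reading the column-summing idea right, it amounts to a Riemann sum; here's a rough Python sketch of my own reconstruction (not the roommate's actual code):

    import math

    def pi_by_columns(cols: int) -> float:
        """Deterministic take on the same area idea: for each column
        x in [0, 1], the quarter circle's height is sqrt(1 - x^2);
        summing the column areas approximates pi / 4."""
        width = 1.0 / cols
        area = 0.0
        for i in range(cols):
            x = (i + 0.5) * width  # midpoint of the column
            area += math.sqrt(1.0 - x * x) * width
        return 4.0 * area

    print(pi_by_columns(10_000))  # converges much faster than sampling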


Piet: Programming language in which programs look like abstract paintings (2002) - https://news.ycombinator.com/item?id=40141777 https://www.dangermouse.net/esoteric/piet.html

In the sample programs there's a big red one... https://www.dangermouse.net/esoteric/piet/samples.html

There's also the IOCCC classic https://www.ioccc.org/1988/westley/index.html


Sorry, I should have been more specific maybe: I do know about Monte Carlo, and yeah, the circle stuff is a more or less canonical example - but I wanted to know more about the Markov Chains, because, again, I only know these in terms of sequence generators and I have some problems imagining how they could "solve problems", unless your problem is "generate words that sorta sound like a specific language but are mostly gibberish".


I’ve always loved this example. I implemented the Monte Carlo pi estimation on a LEGO Mindstorms NXT back in high school. Totally sparked my interest in programming, simulations, etc. Also the NXT’s drag-and-drop, flowchart programming interface was actually a great intro to programming logic. Made it really easy to learn real programming later on.


A Pseudorandom Number Sequence Test Program - https://www.fourmilab.ch/random/

    Monte Carlo Value for Pi

    Each successive sequence of six bytes is used as 24 bit X and Y co-ordinates within a square. If the distance of the randomly-generated point is less than the radius of a circle inscribed within the square, the six-byte sequence is considered a “hit”. The percentage of hits can be used to calculate the value of Pi. For very large streams (this approximation converges very slowly), the value will approach the correct value of Pi if the sequence is close to random. A 500000 byte file created by radioactive decay yielded:

    Monte Carlo value for Pi is 3.143580574 (error 0.06 percent).


TIL, thanks! I asked Claude to generate a simulator [1] based on your comment. I think it came out well.

[1] https://claude.ai/public/artifacts/1b921a50-897e-4d9e-8cfa-0...


Righteous!


Time to trot out a recent experience with ChatGPT: https://news.ycombinator.com/item?id=44167998

TBH I haven't tried to learn anything from it, but for now I still prefer to use it as a brainstorming "partner" to discuss something I already have some robust mental model about. This is, in part, because when I try to use it to answer simple "factual" questions as in the example above, I usually end up discovering that the answer is low-quality if not completely wrong.


Am I the only one that remembers how Microsoft tried to convince everyone to adopt .Net because this way you could have teams where one member could use J#, another use Fortran.Net (or whatever the name was) and old chaps could still contribute by writing Cobol# and everything would just magically work together and you would quadruple productivity just by leveraging the untapped pool of #Intercal talent out there?


Wish I could go back to a time when I believed stuff like this

