hackrmn's comments | Hacker News

And don't forget _overdimensioning_. Vendors love this because volume grows multiplicatively as width, height and depth are scaled up together -- they're not the ones carrying the phone, but they can quite literally pack more features into one. FOSS vendors even more so, since they need more ground to compete on (their hardware being older and their prices higher for lack of economies of scale).


Standard-size phone screens are easier to procure. This determines the dimensions. Same for batteries.


FOSS mobile hardware vendors already have a hard enough niche to target; "people who say they want small phones" is just fuel to an already burning fire for them. Each niche they add doesn't add user bases together -- it multiplies the userbase percentages, shrinking the target market further.


Why do they keep making them BIGGER and BIGGER? Our hands don't grow that fast; most adult males struggle to use their phone with one hand. Only a vocal minority prefers the oversized phone-computer; most of us just want to use it briefly on the go before tucking it back into the pocket, without it tearing a hole in said pocket (which my last two phones have done).

If anyone is listening -- can you put a cap on the dimensions? A 5.5" screen is plenty; if I want the cinema experience I will either a) go to the cinema or b) use some VR/AR device. For the remaining use cases, like watching a movie on a bus/plane/train, it doesn't weigh up against carrying a brick around.


My complaint is also: why do they weigh so much? Even phones with the same dimensions as an older model keep getting heavier. This phone is 201 g, which has become the new normal, so I can't really complain much, but it's still not the phone for me. And it's about 170 mm tall, which is huge but sadly normal in 2025.


I haven't thought about it as much as I have about sizes, but you do have an interesting point to ponder. I can only offer an explanation, but no consolation:

I guess electronics have gotten denser, and higher density at the same volume quite literally translates to more weight. The density comes from being able to cram in more electronics as our fabrication technology inches forward (e.g. Intel/TSMC/etc. pushing toward the 1 nm barrier for transistors).

Remember the old Nokia phones, where the plastic shell likely accounted for as much volume as a modern phone instead dedicates to its entire front camera assembly? The latter weighs much more than the plastic, for the same volume. Now apply that to _every_ component in a modern phone and the difference compounds -- there's simply more packed into every cubic millimetre of a phone today. No wonder it's getting heavier.


I like large screens because I value having plenty of context visible, e.g. in a webpage or a conversation.

Also, don't forget the bigger batteries that large phones enable.


I don't, but could you not forget that some people don't have a car? You can walk or use transit in proper cities. And when I need a bag for it, it's not a phone any more -- it's a laptop without a keyboard.


The context is the same though, regardless of screen size? The UI and/or UX doesn't change much when the screen is physically smaller? The resolution usually stays the same, and even if it shrank or grew, most apps wouldn't care as the libraries used to render their widgets are more or less "resolution invariant".

I mean I get what you're implying, I am just making sure I understand the meaning of "context" here. But if you have large fingers, smaller buttons obviously make the device harder to use, no two ways about it. However, in Android and iOS both, it's possible (for the user) to scale everything up, to help solve that very problem.

The bigger battery argument is a valid one too, but keep in mind that on average most of the battery is consumed by the screen, and a larger screen will eat more battery -- so it's a bit like the rocket equation: a bigger rocket needs more fuel, more fuel needs more space and adds weight, which needs more rocket and more fuel again, and so on. For both batteries and rockets there's a golden middle somewhere, I think. But it's a moving target, since screens and batteries differ -- OLED vs. LED-lit LCD for the screen, Li-Po vs. Li-ion for the battery, and so on. In short: I don't think a 5.5" phone (my preferred size) would suffer from shorter battery life, perhaps on the contrary (vs. a 6.5"). Especially considering that _large_ phones tend to be made _thinner_, since their ergonomics depend more on thickness (given the large width and height), perhaps becoming a problem above 8 mm, while a 5.5" phone can be used comfortably even at 8-10 mm thick, since it's smaller in the other two dimensions. That extra afforded thickness can directly translate into a battery with as much or more capacity than one in a 7 mm "slick" 6.5" phone.
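
To put some numbers on that last point -- every figure below is made up but hopefully plausible; it's the geometry that matters here, not the exact values:

```python
# Illustrative only: compare the battery volume afforded by a thicker small
# phone vs. a thinner large phone. All dimensions and densities are guesses.

def battery_wh(footprint_w_mm, footprint_h_mm, thickness_mm, wh_per_cm3=0.6):
    """Energy of a pouch cell filling the given space; 0.6 Wh/cm^3 is a
    ballpark volumetric density for modern Li-ion pouch cells."""
    volume_cm3 = (footprint_w_mm * footprint_h_mm * thickness_mm) / 1000.0
    return volume_cm3 * wh_per_cm3

# Guessed battery footprints, and the battery thickness left over once the
# display stack, board and back cover are accounted for.
small_thick = battery_wh(60, 80, 5.0)   # 5.5" phone, ~9 mm overall
large_thin  = battery_wh(65, 90, 3.5)   # 6.5" phone, ~7 mm overall

print(f"small+thick: {small_thick:.1f} Wh, large+thin: {large_thin:.1f} Wh")
# With these made-up numbers the chunkier small phone actually wins on capacity.
```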


I had a phone where the top half of the touch screen broke, so I installed "Quick Cursor" to be able to access it. I still use it on my new phone, since it lets me control everything using only about 1/3 of the touch screen. This should really come built into the OS, especially since the app requires some pretty aggressive permissions to work.


Hi, Quick Cursor dev here.

I completely agree with you -- my app's functionality should be built into the OS, for better integration, privacy reasons, etc.

I just wanted to add that, because of the permissions my app needs in order to work, I will never add the internet permission to Quick Cursor. I made this decision 5 years ago when I started the app, because I understood the privacy risk, and my app will never have the internet access permission.

For an app to have access to the internet, it needs android.permission.INTERNET declared in its manifest, otherwise networking won't work. This is easy to check: there are apps that show you this info about your installed apps, or you can manually look at the AndroidManifest inside the app's .APK.
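
If you'd rather script the check than trust a permission-viewer app, here is a minimal sketch assuming the third-party androguard Python library (its import path has moved between versions, so treat the details as illustrative):

```python
# Minimal sketch: list the permissions declared in an APK's manifest.
# Assumes the third-party "androguard" package (pip install androguard);
# this import path is from its older releases and may differ in newer ones.
from androguard.core.bytecodes.apk import APK

apk = APK("quickcursor.apk")  # path to an APK pulled from the device

for perm in apk.get_permissions():
    print(perm)

# If android.permission.INTERNET is not declared, the app cannot open
# network sockets at all -- the OS enforces this.
if "android.permission.INTERNET" not in apk.get_permissions():
    print("No INTERNET permission declared.")
```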


Thank you for making this!

I am using GrapheneOS, and I think this OS actually also allows you to explicitly toggle the Network permission off for apps that require it, but I did notice that it wasn't even present on the list to begin with :). I also like to disable Network for things like keyboard apps.


I'm glad you like it. That's an awesome feature that should be implemented into all Android devices.


I have to say, reading the statement "requires some pretty aggressive permissions to work" makes it sound like there's a problem with Android's permission model. I mean, if the app needs permissions, one should normally assume it needs them in order to, well, be permitted to do its work? In other words, a "good-natured" app should not request more permissions than it needs to work, and that last part is something of a tautology. Either that, or Android has a broken permission model, which may simply be too coarse -- as in, you need "access to Internet" for auto-update to work, despite auto-update normally being done by Google (on a Google-forked Android) over a secure channel etc.


I agree about the principle of least privilege, but the problem is that almost any accessibility app must basically be able to simulate user input to function, i.e. for a cursor app to actually provide a cursor, it must have the permission to activate any UI element on the screen.

I trust Quick Cursor, but I shouldn't have to - since basically every smartphone now is too big to use with one hand without having to shuffle it around and risk dropping it, I think the cursor feature should be built into the OS.


And can you give a number on the "vocal minority"? Because companies usually sell what customers want and if the majority of the phones on the market is big, then that's what people want.


Hmm, or they fabricate the demand so they can fulfill it. SUVs anyone?


As an outsider, how do they do that?

I am guessing

- put the best specs in the largest devices (FOMO-ish, status symbol)

- put the highest price on the largest devices (status symbol)

- um? simply not creating smaller devices at all would also do it, I guess?


I mean, the SUV case is easy:

- market SUVs.

- stock dealerships with mostly SUVs.

- complain that nobody is buying non-SUVs (they can't; it's only SUV stock),

- stop selling non-SUV models.

- complete transformation into indeterminate, indistinguishable car brand no.3564.


Also, government policies that make SUVs relatively cheaper (e.g. emissions or fuel-economy rules that treat light trucks more leniently).


The problem isn't that this hypothetical vocal minority is completely imaginary, but that consumers will repeatedly gravitate toward the biggest, most glittery product. Only a few realize it's not what they actually want.


The problem isn’t that the majority of phones are big, it’s that virtually all are big and heavy. There is no modern, properly supported smartphone with 4.x” or 5.x” diagonal or below 150 grams anymore.


In all frankness, I think this is the legendary "if people wanted a faster horse..." situation attributed to Henry Ford -- consumers don't always know what they want, and I know quite a few people who couldn't confidently answer the question "why did you buy a 6.5" iPhone?" with anything other than "I have used an iPhone all my life and this is the size they sell". Meaning the consumer doesn't choose much; the choice is to buy a newer iPhone. The simplified argument that "phones are getting larger because consumers want larger phones" is indeed only that -- a very simplified way to look at it. There's much more going on there.

It's very similar to smart TVs. Yes, most people do prefer smart TVs, but vendors use the smart features very successfully to sell inferior panels (poor color, poor contrast etc.), compensating for them and pulling more margin, since that's how the consumer functions (being utterly unable to quantify display quality on an uncalibrated TV). Anyway, I am digressing -- the point of my comparison is that it's complicated, and not nearly as simple as "consumers want larger phones / TVs with slow menus and a shitty picture as long as there's Netflix in there".


Do smaller phones still exist?

Genuinely asking. I’m on iPhone, which hasn’t changed form factor in quite a while.


Yes, but no "flagship" devices from mainstream brands, only specialty/novelty stuff. The last <6" flagship I'm aware of was the Asus Zenfone 10 in 2023.


Yes and no.

Technically yes: there is the iPhone 13 Mini, and in the Android world there are 2 or 3 Unihertz models and some "no-name" Chinese AliExpress brands (Cubot has some small model, AFAIR, and there are several even more no-name offerings).

Realistically no. All these Android models are underspecced: old cores (8+ years old), low screen resolutions (low in PPI, not just proportionally smaller than the big panels), small amounts of RAM and storage (the latest Unihertz is a happy exception here, but not in the other areas), very bad cameras, very short OS update periods (if any).

The iPhone 13 mini is OK-ish (my wife uses one): the camera is still very poor, but everything else is usable.

Android is worse. If all you need are phone calls and messaging with Telegram/WhatsApp/Signal, it is OK. But if you need a good camera, a good browsing experience (many open tabs) or something specific, you are out of luck. Even Google Maps can be sluggish. Plus zero-days in old Android versions.

Good cameras are my pet peeve: the good ones go only into flagship models and maybe sub-flagship ones (flagship and sub-flagship are often differentiated by the addition of a tele module, which is the most useful one for me).



I have a question for you (I've tried to google this, with no luck): what is the last official Android update for the Jelly Star? It was released with Android 13 -- did it get 14? 15? 16?

OS updates look like a pain point for all these non-mainstream phones to me -- am I right, or is that a wrong impression?

Thank you.


You are right, the phone is still on Android 13, and I don't think further updates will come.


That is what scares me: a system needs regular updates not for shiny new features, but for bug fixes, zero-days, etc.


They did try bringing back smaller ~5.5" phones, and hardly anybody bought them.

https://www.tomsguide.com/news/iphone-12-mini-sales-a-disast...

https://www.macrumors.com/2022/04/21/iphone-13-mini-unpopula...

I think the vocal minority is the other way around.


I think your argument is flawed -- perhaps rephrasing it to say that _Apple_ tried bringing back a smaller _iPhone_ and _presumably_ few _existing_ customers bought it would have made a better one? Because I would assume most iPhone buyers are either _existing_ iPhone users, or people who swear by Apple software (iOS, macOS), so this is about being able to read the statistics correctly.

Add to the above that the iPhone "mini" might have been slower or just "worse" -- it wasn't only the screen that was reduced in size -- so the word of mouth might have been that the phone is simply worse, and that contributed to poor sales.

There's no way of telling how a 5.5" phone would fare until there are consistent, prolonged, feature-parity sales of such phones that are otherwise identical to the other offerings by the same brand, across multiple brands (if I am a die-hard Fairphone customer, I am not buying an iPhone regardless of screen size), to help gather proper statistics.


As the article points out, the iPhone 13 mini sold half as much as the other iPhone 13 models, while competing with the iPhone SE which was the same size at half the price. That isn’t exactly terrible.


The lowest alternative, the 13 Pro Max, had double the sales volume (at 1.5x the cost), while the VAST majority chose the 6.1" models instead -- how does that support the argument that the desire for a >5.5" phone comes from a vocal minority? The articles themselves directly state that sales of the small models were poor; it's not the other way around, no matter how you spin the charts.

The relative preference for the larger unit has increased over time as well, e.g.: https://www.macrumors.com/2025/05/28/iphone-16-q1-2025-best-...


The markets for smaller phones and the larger Pro Max models look like they’re roughly in the same order of magnitude. It doesn’t look like a negligible demand that is not worth serving.


The larger segment gets two phones (Pro and Pro Max) each generation, on top of the base model, which is already >6". That you need to compare against a specific subset of the larger models and argue it's somewhere within the same factor of 10 is exactly why Apple stopped making the mini.

The articles are literally about how bad the sales were before Apple stopped making minis. There is no reasonable way to conclude that means they were actually worth serving.


Poorly optimized apps need big batteries.


It's been a while since I've had to sit down and apply or write a parser or a parser generator, but given that it is/offers a "simple recursive descent" parser, does that mean left-recursion is a no-go?

In my opinion, parser libraries/frameworks are indeed all mired in the usual suspects that make adoption painful:

* Having to learn another grammar language which, for some strange reason I suspect has to do with "tradition", must reside in _a file_ -- as opposed to just being expressed with code (e.g. `bin_expr = Concatenation(Ref('expr'), bin_op, Ref('expr'))`; see the sketch after this list); if a BNF-like language _is_ used for the grammar, it is almost always used with some non-standard syntax for helping resolve shift/reduce conflicts, etc. -- which for me puts it into the same "this is not needed" category

* Being defined by the kind of parser that is generated, implying you have to know parser theory in order to know which languages you will never be able to parse with said library/framework; made even worse when some kludge is added, with extensive documentation on how to get out of the predicament, because "the parser cannot theoretically handle the grammar/language, but it's otherwise really great because it uses ABC kind of parsing, which is why we chose it". The impression this gives someone who knows parsing just well enough to know they need to construct a grammar, and that the grammar may feature ambiguities, is that they have to learn more parser theory; and when you learn more parser theory, you usually just implement your own parser, unless you need to parse e.g. C++, admittedly. As a case in point, see my remark above on the "recursive descent parser" used by Lexy
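
To make the first point concrete, here is a toy sketch of what I mean by a grammar "expressed with code": a naive combinator-style recursive descent parser, with names like `Concatenation` and `Ref` invented to mirror the pseudo-code above (not any real library):

```python
# Toy sketch of a grammar defined directly in code, with no separate grammar
# file. Each combinator's parse() returns the remaining input on success,
# or None on failure.

class Literal:
    def __init__(self, text): self.text = text
    def parse(self, s, rules):
        return s[len(self.text):] if s.startswith(self.text) else None

class Concatenation:
    def __init__(self, *parts): self.parts = parts
    def parse(self, s, rules):
        for p in self.parts:
            s = p.parse(s, rules)
            if s is None:
                return None
        return s

class Choice:
    def __init__(self, *alts): self.alts = alts
    def parse(self, s, rules):
        for a in self.alts:
            rest = a.parse(s, rules)
            if rest is not None:
                return rest
        return None

class Ref:
    def __init__(self, name): self.name = name
    def parse(self, s, rules):
        return rules[self.name].parse(s, rules)

# A deliberately right-recursive grammar: expr := digit '+' expr | digit
rules = {
    "digit": Choice(*[Literal(d) for d in "0123456789"]),
    "expr": Choice(Concatenation(Ref("digit"), Literal("+"), Ref("expr")),
                   Ref("digit")),
}

print(repr(rules["expr"].parse("1+2+3", rules)))  # '' -> fully consumed
print(repr(rules["expr"].parse("1+", rules)))     # '+' -> trailing '+' left over
```

A left-recursive rule (e.g. `expr := expr '+' digit`) would send `Ref("expr")` into infinite recursion here, which ties right back to my opening question about "simple recursive descent".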

To be frank, I like the addition of yet another parser generator -- the more the merrier -- because contrary to that one earlier statement that "parsing is a solved problem", I think it is not: the theory has a substantial head start on the practice, meaning that in theory it is [a solved problem], but in practice, in my experience, it is not.


I assume yours is a general comment about parser generators and not specifically about Lexy, which in fact contradicts your first bullet point by providing templates to express your grammar directly in the C++ code.


Indeed, on the point of parser generators utilizing "text"-based grammar [files], Lexy is not among these.


The practice also lacks an answer to the "when do you skip all the libraries and go for a hand-rolled parser, which gives you better control over errors, rollbacks, decision trees etc." question.


Testing by "simulating" button presses and other actions like that, including inspecting pixels, is part of so-called "black-box" testing, and offers merits of its own. At the very least because software is used by people who click buttons which may modify pixels, and those people aren't concerned with what your model is -- they don't even know anything about the way you may have implemented it. In the end everything runs on a fairly RISC-y CPU and it either works or it doesn't (from the user's perspective) -- replicating the user's workflow is useful in that it uncovers issues that matter to users and thus normally affect your bottom line.


In my experience, writing readable code and writing code that behaves correctly (fulfills the contract/requirements without hiding potential faults) are often mutually exclusive -- most people end up doing one or the other. This is related to the never-ending functional programming vs. "traditional programming" debate (a moving target, largely OOP or at the very least "whatever is taught in graduate school"), since the former, in contrast to the article which pretty much _assumes_ the latter, doesn't even provide "variables", literally or in the informal sense (things you can "assign to", whether they change or not).

Anyway, I happen to belong in the latter category according to most -- the longer I have been doing this, the more I lean into the purely functional style, with almost mathematical rigor, because I have learned how easily subtle errors creep in once you have actual _variables_ that may change freely, which start to encourage other habits that in the end erode correctness, readable or not.

Now, you may blame people like me, and I cannot blame you for not having the cognitive-load capacity to understand some of the code I write "succinctly", but my point is that for all the merit of the article (yes, I agree code is read much more often than it is written, which lends value to the "readability" argument), it doesn't acknowledge that readability and correctness are _in practice_ often mutually exclusive. Like, in the field. Because I wager that the tendency is to approach a more mathematical style of expression as one becomes better at designing software, the adversarial conditions being bugs that hide in mutable state and in large, if "simple", bodies of functions and classes (whose methods you cannot guarantee not to mutate the object's state).

We need to find means of writing code that is readable without leaning on things like mutability, which has also been shown to compromise correctness. What good is readable software that never manages to escape the vortex of issues, keeping the industry perpetually busy "fixing bugs"?

At my place of work, I obviously see both kinds of the "mutually exclusive", and I can tell you, without undue pride and yet with good confidence: the people who write readable code -- aliasing otherwise complex expressions with eloquently named variables (or sometimes even "constants", bless their heart) and designing clumsy class hierarchies -- spend a lot of subsequent effort never quite being "done" with the code, and I don't mean just because requirements keep changing; no, they sit and essentially "fixup commit" the code they write, seemingly in perpetuity. And we have a select few who write a code base with as few variables as possible and a lot of pure functions -- what I referred to as "mathematical programming", in a way -- and I rarely see them offering "PRs" to fix their earlier mishaps. The message that sends me is pretty clear.

So yeah, by all means, let's find ways to write code our fellow man can understand, but the article glosses over a factor that is at least as important -- all the mutability, and the care for "cognitive load" capacity (which _may be_ lower for the current generation of software engineers than earlier ones), may be keeping us in the rotating vortex of bugs we so "proudly" crouch over as we pretend to be "busy". I, for one, prefer to write code that works right from the get-go, and not have to come back to said code unless the requirements that made me write it the way I did change. On rare occasions, admittedly, I have to sacrifice readability for correctness -- not because it's inherently one or the other, but because I too haven't yet found the perfect means of always having both -- and yet correctness is at the absolute top of my list, and I advocate that it should be at the top of yours as well, dare I say. But that is me -- perhaps I set the bar too high?
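
For a deliberately tiny, made-up illustration of the two styles I keep contrasting (the names and numbers are mine, not from any real code base):

```python
# Illustrative only: two ways to compute the total of discounted, taxed prices.

# Variable-heavy style: intermediate state is mutated as we go, so new
# requirements tend to get bolted on by touching 'total' or 'p' in yet
# another place, and a misplaced mutation silently changes the result.
def total_mutable(prices, discount, tax_rate):
    total = 0.0
    for p in prices:
        p = p * (1 - discount)   # reassigning the loop variable in place
        p = p * (1 + tax_rate)
        total += p
    return total

# Function-oriented style: no reassignment; each step is a pure expression
# that can be read (and tested) in isolation.
def total_pure(prices, discount, tax_rate):
    def net(p):
        return p * (1 - discount) * (1 + tax_rate)
    return sum(net(p) for p in prices)

assert abs(total_mutable([10, 20], 0.1, 0.25)
           - total_pure([10, 20], 0.1, 0.25)) < 1e-9
```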


The current state of things is a mixed bag, in my opinion. XSLT wasn't perfect, XML required a lot of repetition, and neither got the amount of scrutiny both deserved to advance past their own inconveniences. But a lot of what we have today has just as many drawbacks, and that _despite_ WHATWG and Google hawking over it all. XML had namespaces, which alone were quite a potent feature that allowed managing the _semantics_ of data with more rigor and flexibility -- compared to the, in my opinion, amateur-ish "custom elements" approach of HTML 5, and how Apple refuses to implement parts of the spec because they "don't believe in inheritance" (I am being liberal, but those who know, know). Then there is the HTML 5 parser itself -- "for the masses" -- with its idiotic and incomprehensible element-specific rules, which nobody ever remembers for that very reason. Forgot a `</p>`? No problem, you can discover your omission after you've shipped! The HTML 5 parser never aborts!


Is XSLT enjoying the equivalent of the Streisand effect? Ever since the news came out of Google wanting to rid Chrome of XSLT support, there have been a number of XSLT-related stories here. I am not complaining; I think XSLT deserves a second life -- it hasn't had the scrutiny it deserves, nor its "15 years of fame".


That’s basically what happened with MathML a decade ago. Chromium had briefly had an implementation (M24 alone!), Firefox had an excellent implementation, and Safari had a decent implementation; there was very little usage. Chromium proposed removing it. There was much kerfuffle. When the dust settled, MathML won. MathML Core was defined, Igalia contributed a robust MathML implementation to Chromium, Safari tightened their implementation up, and people started actually using it a lot more.

Around the same time, Google tried to deprecate SMIL (SVG animation tech), which would probably have led to its removal after some time. This also failed, and it’s used more than it was then (though CSS animations are probably quite a bit more popular, and have over time become more capable), probably because now all engines support it robustly (MSHTML and EdgeHTML never did).

I hope that we’ll get a more formal commitment to XML and XSLT, and XSLT 3 in browsers, out of this.


As an aside, I note that MSHTML supported “HTML+TIME” (anyone remember Vizact?).


It wasn't Google. The original proposal came from Mozilla, and apparently all the browser vendors want to be rid of it.


Browser makers trying to kill XSLT is not new: Chrome had intents to deprecate and remove XSLT in 2013 and 2015 (which both failed).

Each time so far, there’s been movement for a while, significant pushback, and the one who was championing removal realises it’s hard, and quietly drops it. Smaug from Mozilla bringing it up at a WHATNOT meeting a few months ago looked like it was heading the same way, a “yeah, we’d kinda like to do this, but… meh, see what happens”. Then a few months later Mason Freed of Google decided to try championing it. We’ll see where things go.


Which part of what I said was false?


Why should the presence of replies to your comment mean that they don't agree with you? Sometimes replies add more context or are simply elaborated me-toos.


I think that's the exact point the parent commenter is trying to make. One person said Google has wanted to get rid of XSLT for years; someone else responded that it was Mozilla who made this proposal; and then someone else responded with details about Google's history of wanting to get rid of it previously, at which point the question of which part of the second comment was untrue entered the picture. That question doesn't make a ton of sense, because if the reply to the second comment is assumed to be disagreement, the same logic would presumably apply to the second comment itself, so "what part is untrue?" would be just as valid to ask about its "disagreement" with the original statement that Google has wanted to get rid of XSLT for years.

Of course, I could be making the same mistake in reading your comment as expressing disagreement with the one you're responding to! If that's not the case, then I'll happily accept my mistake so the chain can hopefully stop here.

(edit: I'm realizing now that this definitely is my own mistake, as I misread which comment this one was replying to. I might need to invest time in finding a new app for reading HN on mobile, since the indentation levels on the one I've been using clearly aren't large enough for my terrible eyesight...)


This is a good comment and there is an answer to your question. I know because I react the same way sometimes. It's about insecurity. They're interpreting it as negative by default either because they've been treated poorly before or because they have low self-esteem. Either way, their state shapes their expectations.


There's a perverse irony to responding like this (besides the fact that no one, whether implicitly or explicitly, said anything like, "That's not true"), since the comment you replied to merely mentioned "Google wanting to rid Chrome of XSLT support", which you indicated was untrue. Which part, exactly?


> came from Mozilla

Maybe they should stop wasting money on pointless, proprietary side tangents before they set out to break standards.


It’s a parallel universe Lisp/Rust as far as HN goes.


I've not heard of Jane Street, but looking at the site, it seems they're quite the tech-savvy group for an _asset trading_ company. And reading the article, they apparently "trade" (pun intended) in quite a few different things from CSS to OCaml, which is really nice to see for a place that doesn't directly deal with software as their [business] product (or did I misunderstand their market?).


They are basically the main company driving OCaml development outside INRIA, and the main authors behind the Real World OCaml book (https://dev.realworldocaml.org), which followed a similar one for Haskell.


If you want to learn about Jane Street:

1. Watch Stand-Up Maths and Numberphile videos on YouTube and do not skip the sponsor advertisement readings, e.g. https://youtube.com/watch?v=eqyuQZHfNPQ .

2. Read https://news.ycombinator.com/item?id=44480916 .


Jane Street is a technology company first and foremost whose main product is high-frequency trading, similar to many of these neo-hedge funds (e.g. HRT, Citadel). This contrasts with the more established asset managers (e.g. BlackRock, Vanguard, BRK, etc.), which I can say from experience are corps tied down in bureaucracy.




> which is really nice to see for a place that doesn't directly deal with software as their [business] product

Part of Jane Street's business is HFT, and software is the "product" of HFT firms, because without extremely low-latency software (and hardware) they cannot make money.


They are also a sponsor of ZuriHac (https://youtu.be/Jzh3e-I4j-w?feature=shared&t=817)


In software engineering the maxim "interfaces, not implementations" has been around for a long time (certainly since at least Robert "Uncle Bob" C. Martin started teaching it), and it is a generalization of "we don't break userspace". In essence it boils down to declaring an interface without announcing or depending on the implementation. In OOP languages like C++, a code base would aggressively use interfaces as types, never the concrete class types (which implement the interfaces), making it easier to reason about how and whether the program's behavior changes when one implementation of an interface is swapped for another.

With Linux, which is by and large a C codebase, they pass pointers to structures into kernel procedures, which can then do as they please -- as long as the documentation on said structures (which usually says which fields are retained, with which values, and so on) remains unchanged. That's their "object-oriented programming" (yeah, I know Linus would likely have hated the comparison).
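
As a toy illustration of the principle (sketched in Python for brevity rather than C++ or kernel C; the interface and class names are made up):

```python
# Illustrative sketch: callers depend only on the Storage interface, never on
# a concrete class, so implementations can be swapped without touching callers.
from abc import ABC, abstractmethod

class Storage(ABC):
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, value: bytes) -> None: ...

class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}
    def read(self, key):
        return self._data[key]
    def write(self, key, value):
        self._data[key] = value

def backup(src: Storage, dst: Storage, keys):
    # This function only "knows" the interface; any Storage implementation works.
    for k in keys:
        dst.write(k, src.read(k))

a, b = InMemoryStorage(), InMemoryStorage()
a.write("config", b"v=1")
backup(a, b, ["config"])
print(b.read("config"))  # b'v=1'
```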


I've always been a staunch opponent of putting a `/v1/`-style element in the URL. I understand its function and the convenience it allows for "versioning", but I grew up with HTTP and I always liked how it insisted that the URL is, well, the _location_ of the _resource_ -- and `/v1/` just muddies things up by being in the URL. It's obviously not a version of the resource that `/v1/` indicates, but of the API implementation, which is the first telltale sign of an architectural "blunder", and in my experience these compound, always and invariably, given enough time.

If the consumer wants to consume a specific version of the API, the means to do so can be implemented with an alternative domain name, or -- even better (who wants to maintain alternative domain names) -- with a request _header_, e.g. `X-API-Version: v1` (or another one, perhaps a standardised one).

In any case, the `/v1/` thing is something of a cargo cult -- I remember someone proposing it a good while ago, and it seems to have been adopted since without much further thought. It doesn't make sense to debate the pros and cons of REST/HATEOAS if your resource identifier scheme is poorly designed, IMO.
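
As a tiny sketch of the header-based alternative (Flask is used purely for illustration; the header name, route and handlers are all made up):

```python
# Illustrative sketch: the URL names only the resource; the API version is
# negotiated via a request header instead of a /v1/ path segment.
from flask import Flask, request, jsonify

app = Flask(__name__)

def order_v1(order_id):
    return {"id": order_id, "total": "10.00"}

def order_v2(order_id):
    # v2 splits the total into amount + currency, say
    return {"id": order_id, "total": {"amount": "10.00", "currency": "EUR"}}

HANDLERS = {"v1": order_v1, "v2": order_v2}

@app.route("/orders/<order_id>")
def get_order(order_id):
    version = request.headers.get("X-API-Version", "v2")  # default to latest
    handler = HANDLERS.get(version)
    if handler is None:
        return jsonify(error="unsupported API version"), 400
    return jsonify(handler(order_id))
```

The resource keeps a single stable URL, and a consumer opts into an older representation with the header, e.g. `curl -H "X-API-Version: v1" https://api.example.com/orders/42`.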

