Hacker News: dexwiz's comments

I think AI as a tool versus AI as a product are different things. Even in coding you can see it with tab completion/agents vs. vibe coding. It's a spectrum, and people are trying to find their personal divider on it. Additionally there are those out there that decry anything involving AI as heresy. (no thinking machines!)

> Additionally there are those out there that decry anything involving AI as heresy. (no thinking machines!)

I don’t think anyone decrying the current crop of “AI” is against “thinking machines”. We’re not there yet, LLMs don’t think, despite the marketing.


This is exactly the sort of refusal to comprehend so that you can get in an "um, ackshually" that the OP is talking about. He's quoting a line from a book as a metaphor for a concept the book illustrates well.

You see someone who you think has missed a larger point, and all you can muster as a reply is a vague jab and an unexplained reference? Do you not see the irony? Your whole comment is an “um, ackshually”, the very thing you are decrying.

I didn’t enjoy Dune, by the way. No shade on those who did, of course, but I couldn’t bring myself to finish it.

If you think there’s something there, explain your point. Make an argument. Maybe I have misunderstood something and will correct my thinking, or maybe you have misunderstood and will correct yours. But as it is, I don’t see your comment as providing any value to the discussion. It’s the equivalent of a hit and run, meant to insult the other person while remaining uncommitted enough to shield yourself from criticism.


This guy gets it.

Yes, latexr managed to somehow sidestep the point entirely and make a pedantic correction. I notice this a lot in these discussions.

The point is AI has lots of useful applications, even though there's also lots of detestable ones.


True. It's more like 'no creative machines' or 'no entry level middle class job machines'.

And they don’t reason. They do prompt smoothing.

LLMs don't think, and planes don't fly.

LLMs think in the same way submarines swim.

So... they do think? Or what is your position? Submarines obviously do swim, otherwise they'd either float or sink.

It's an old saying. The ability of submarines to move through water has nothing to do with swimming, and AI's ability to generate content has nothing to do with thinking.

Uh, kind of the opposite :D

The quote (from Dijkstra) is that asking whether machines think is as uninteresting as asking whether submarines swim. He's not saying machines don't think, he's saying it's a pointless thing to argue about - an opinion about whether AIs think is an opinion about word usage, not about AIs.


Yes, submarines swim just like how people sail the breaststroke.

Are you hitting tab because it’s what you were about to type, or did it “generate” something you don’t understand? Seems like a personal distinction to me.

Until your spouse/SO/sister/mother/girlfriend spurns a LEO, and then the LEO uses it to stalk and harass them. Talk to any LEO: they constantly misuse their data access to look up friends/family/neighbors to find dirt. Most of the time it's relatively harmless gossip, but it can easily be used to harass people.

These tools tend to be very expensive in my experience unless you are running your own monitoring cloud. Either you end up sampling traces at low rates to save on costs, or your observability bill is more than your infrastructure bill.

We self host Grafana Tempo and whilst the cost isn’t negligible (at 50k spans per second), the money saved in developer time when debugging an error, compared to having to sift through and connect logs, is easily an order of magnitude higher.

Doing stuff like turning on tracing for clients that saw errors in the last 2 minutes, or for requests that were retried should only gather a small portion of your data. Maybe you can include other sessions/requests at random if you want to have a baseline to compare against.
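The sampling policy described above can be sketched as a simple decision function. This is an illustrative sketch, not any particular tracing library's API; the function name, parameters, and thresholds are all invented for the example.

```python
import random

# Hypothetical error-biased sampler. Keep every trace for clients that saw
# an error recently and for retried requests; sample everything else at a
# low random baseline rate so there is still data to compare against.

ERROR_WINDOW_SECONDS = 120   # "errors in the last 2 minutes"
BASELINE_RATE = 0.01         # 1% random baseline

def should_sample(client_id, now, recent_errors, was_retried):
    """Decide whether to record a full trace for this request.

    recent_errors maps client_id -> timestamp of that client's last error.
    """
    last_error = recent_errors.get(client_id)
    if last_error is not None and now - last_error <= ERROR_WINDOW_SECONDS:
        return True   # client errored recently: always trace
    if was_retried:
        return True   # retried requests are suspicious: always trace
    return random.random() < BASELINE_RATE  # small random baseline
```

With a policy like this, the interesting traffic (errors, retries) is captured in full while the overall span volume stays a small fraction of total requests.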

Try open-source databases specially designed for traces, such as Grafana Tempo or VictoriaTraces. They can handle ingestion rates of hundreds of thousands of trace spans per second on a regular laptop.

I like to write them on my own in every company I'm in, using bash. So I have a local set of bash commands to help me figure out logs and colorize the items I want.

Takes some time and it's a pain in the ass initially, but once I've matured them, work becomes so much easier. It reduces dependence on other people / teams / access as well.

Edit: Thinking about this, they won't work in other use cases. I'm a data engineer, so my jobs are mostly sequential.
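A minimal sketch of the kind of bash helper being described: the function name and color choices here are illustrative, not from the original comment.

```shell
#!/usr/bin/env bash
# Hypothetical log-colorizing helper: highlight ERROR in red and WARN in
# yellow using ANSI escape codes, passing everything else through unchanged.
colorlog() {
  sed -e $'s/ERROR/\e[31mERROR\e[0m/g' \
      -e $'s/WARN/\e[33mWARN\e[0m/g'
}

# Usage: tail -f app.log | colorlog
```

Because it is just a shell function in a dotfile, it travels with you between jobs and works on anything that produces text, with no monitoring stack or extra access required.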


I've tried HyperDX and SigNoz, they seem easy to self-host and decent enough

I have seen pushback on this kind of behavior because "users don't like error codes" or other such nonsense. UX and Product like to pretend nothing will ever break, and when it does they want some funny little image, not useful output.

A good compromise is to log whenever a user would see the error code, and treat those events with very high priority.


We put the error code behind a kind of message/dialog that invites the user to contact us if the problem persists and then report that code.

It’s my long-standing wish to be able to link traces/errors automatically to callers when they call the helpdesk. We have all the required information. It’s just that the helpdesk actually has very little use for this level of detail. So they can only attach it to the ticket so that the actual application teams don’t have to search for it.


> I have seen pushback on this kind of behavior because "users don't like error codes" or other such nonsense […]

There are two dimensions to it: UX and security.

Displaying excessive technical information on an end-user interface will complicate support and likely reveal too much about the internal system design, making the system vulnerable to external attacks.

The latter is particularly concerning for any design facing the public internet. A frequently recommended approach is exception shielding: upon encountering a problem, show a nondescript user-facing message (potentially including a reference ID pinpointing the problem in space and time) and log a detailed internal message with the problem’s details and context for L3 support / engineering.
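The exception-shielding pattern can be sketched in a few lines. This is a generic illustration, assuming a plain `logging` setup; the function name and message wording are invented for the example.

```python
import logging
import uuid

log = logging.getLogger("app")

def shielded(operation):
    """Run operation(); on failure, log details internally and return only
    a nondescript message with a reference ID to show the user."""
    try:
        return operation()
    except Exception:
        ref = uuid.uuid4().hex[:8]   # reference ID pinpointing the failure
        # Detailed internal message, including the stack trace:
        log.exception("operation failed (ref=%s)", ref)
        # Nondescript user-facing message:
        return f"Something went wrong. Please contact support with code {ref}."
```

Support can then grep the internal logs for the reference ID the user reports, without the user-facing message revealing anything about the system's internals.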


Sorry for the OT response, I was curious about this comment[0] you made a while back. How did you measure memory transfer speed?

[0] https://news.ycombinator.com/item?id=38820893


I used «powermetrics» bundled with macOS with «bandwidth» as one of the samplers (--samplers / -s set to «cpu_power,gpu_power,thermal,bandwidth»).

Unfortunately, Apple has taken out the «bandwidth» sampler from «powermetrics», and it is no longer possible to measure the memory bandwidth as easily.


> UX and Product like to pretend nothing will ever break, and when it does they want some funny little image, not useful output.

Just ignore them or provide appeasement insofar that it doesn’t mess with your ability to maintain the system.

  (cat picture or something)
  
  Oh no, something went wrong.
  
  Please don’t hesitate to reach out to our support: (details)
  This code will better help us understand what happened: (request or trace ID)

Nah, that’s an easy problem to solve with UX copy: “Something went wrong. Try again or contact support. Your support request number is XXXX XXXX” (a base58 version of a UUID).
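The "base58 version of a UUID" idea above can be sketched as follows. This assumes the Bitcoin-style base58 alphabet (which drops 0/O and I/l so codes are hard to misread over the phone); the function name is invented for the example.

```python
import uuid

# Bitcoin-style base58 alphabet: 58 characters, no 0/O or I/l.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def uuid_to_base58(u: uuid.UUID) -> str:
    """Encode a UUID's 128-bit integer value as a base58 string."""
    n = u.int
    digits = []
    while n:
        n, rem = divmod(n, 58)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)) or ALPHABET[0]

# A random UUID encodes to at most 22 base58 characters,
# versus 36 characters for the standard hyphenated form.
code = uuid_to_base58(uuid.uuid4())
```

That gets the support code down to something a user can plausibly read out loud, while still being unique enough to pinpoint the exact failing request in the logs.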

I used to share a similar sentiment about speed, especially after having burned out hard around 30. But after recovering, I think I may have overcorrected. Momentum is very powerful, and it's hard to gain momentum at low speed.

Speed is important but going fast doesn't mean going as fast as possible. It's about going fast sustainably. Work speed isn't binary. You can be fast without being the fastest.


The speed that's between "slow" and "fast" is called normal, and far too many companies, people and leaders deeply believe normal counts as slow.


You are getting downvoted, but I have heard Japan has surprisingly low literacy rates (well below the 99% stated by the government) for just this reason.


I am not sure about the literacy rates, but I live in Japan and pretty much every single Japanese person I have ever talked to has told me how painful kanji are and how they wished the Japanese writing system was easier.

In comparison, my mother language is Spanish, a language with very simple spelling rules. My girlfriend is always surprised how she can read out loud a random Spanish text and even though she doesn't understand it, I will understand her easily (it also helps that both languages have very similar sounds).


How would you solve the homonym problem without a kanji-like character set? I am sure it's possible, but that would be a big challenge.

(For the reader: Japanese has a lot of homonyms, since it has a comparatively limited set of phonemes. It's specifically a problem in writing due to the lack of context, the absence of spaces, and the loss of the pitch accent that helps disambiguate the language when spoken.)


As a native Japanese speaker, I find this homonym concern kind of odd. It’s like asking how Japanese people can speak to each other and understand one another given all the homonyms -- the assumption being that speech alone clearly isn’t enough without written materials with kanji to aid their comprehension.

The obvious way people handle it in speech is by picking words that are clearer in context when homonyms might cause confusion. If you consume any Japanese video content on YouTube etc, it’s very common for speakers to say a homonym, instantly notice the ambiguity, and restate it using a clearer word or brief explanation, which they could, at least in theory, do in no- or low-kanji writing too.

同音異義語の区別に不可欠な漢字の廃止は不可能か?(Is abolishing kanji -- which is essential for distinguishing homonyms -- impossible?)

https://www.kanamozi.org/hikari932-0704.html


The biggest source of homonyms are words imported from Chinese, as Chinese morphemes are usually monosyllabic. It is already a problem in Chinese due to the limited phonotactics, made even worse in Japanese.

So the most obvious solution would be to drop on'yomi (Chinese readings) and go to pure kun'yomi (Japanese readings) whenever possible. My understanding is that such a strategy was used by the Koreans to replace Hanja with Hangul.

Now, I understand that it would be a massive undertaking and extremely unlikely to ever happen, and honestly it's not really my problem, so I am just speculating here xD


Japan has an extremely high literacy rate.


It's easier to throw yourself into a programming project as a programmer than learn completely new skills: art, design, music. Instead the fantasy is that either the game engine is so great people will come make games, or the game engine will support something so radically different the programmer art gets ignored (see simulation games like Minecraft or Factorio). I'm convinced that's why there are so many engines with no games.


I always thought menus had icons so they could be matched to the same functionality on the toolbar. If a menu lacks an icon, then it's probably not on the toolbar. This falls apart when there is no toolbar. But I have definitely found an action in the menu, looked at the icon, and matched it to a button elsewhere.


I believe Microsoft Office 97 for Windows was the first time I saw icons next to menu items. Office 97 had highly customizable menus and toolbars. Each menu item and toolbar item could be thought of as an action with an icon and a label, and that action could be placed in either a menu or a toolbar. Not every menu item had an icon associated with it. Additionally, each icon was colored and was clearly distinct.


Office 97 went pretty overboard on customization. It could be awesome if you knew what you were doing, but I saw countless examples where somebody had accidentally changed something and got stuck. Deleted the File menu? Tough luck!


This is definitely where I would date this pattern from - MS Office 97’s customizable toolbars necessitated this model where every single thing you could do in the application had an icon.

It then got copied into Visual Studio, where making all of the thousands of things you could do and put into custom toolbars or menus have visually meaningful icons was clearly an impossible task, but it didn’t stop Microsoft trying.

I assume Adobe, with their toolbar-centric application suite, participated in the same UI cycle.

By the time of Office 2007 Microsoft were backing off the completely customizable toolbar model with their new ‘Ribbon’ model, which was icon-heavy, but much more deliberately so.


I still regard Office '97 as the best UI it ever had. I spent a lot of time inside it, including a couple of years at a bank reconciling corporate actions before I got my first programming job. The ribbon version was awful in comparison.


2003 was the best/final iteration of it - I still miss old Excel.

New Excel is just garbage in virtually every way.


Yeah, after that they started nuking VBA too. Sad times!


But…LAMBDA()! And LET() and friends.

Also, the Excel Labs formula editor. But it needs a way to tell it "I know I have too many cells! Just let me trace over the 100 nearest rows."

The old scripting language can still be handy if you can keep people from opening the online version of Excel. Especially if you have a certain debugger addin[1]. Excel's JavaScript features are of limited use, if you're offline.

I keep wishing for a spreadsheet to implement all its scripting and formulas in something like Forth behind the scenes, so that every time a competitor announces n-more functions, we can just be like "Oh, really?" and add it.

[1] Related to waterfowl of the plasticised yellow variety. I'm not sure I can mention the name in a post anymore, since ages ago when I tried multiple times to post a properly-referenced (overly-hyperlinked?) message while my connection was very flaky. Note to self: should probably mail dang about this, some day.


I believe some programs used to let you even drag menu items to the toolbar.


Many KDE apps (Dolphin, Kate, Okular, etc.) let you configure their tool bars (or get rid of them entirely) and set them to show just icons, text, or both (with the text to the side or below). It's the kind of thing most people won't bother with, but for frequently used applications it's nice to be able to customize it to suit your needs. It's done via a config option though, not by dragging menu items to the toolbar (which strikes me as something you could initiate by mistake).


MS Office’s fully customisable toolbars, complete with built-in icon editor.

…ripped out when the Office Ribbon was introduced in 2007; the limited customisation is now considered an improvement because of the IT support problems caused by users messing up their own toolbars.

I mean, yes; but that’s what Group Policy is for! And the removal of the icon editor is just being downright mean to bored school kids.


You made me feel old by saying "I believe".


I think this is a common sentiment right now. Although if we extend the analogy, we arrive at towers replacing mounds. People used to spend generations piling dirt and rocks to reach just a fraction of what steel can reach in seasons. No one bemoans the loss of the noble dirt-carrying profession.


Because the web was made to render documents, but users want apps. CSS in part is so confusing because its original incarnations pulled heavily from traditional print media layout terms.

Everything since then was an attempt to leverage JS to turn documents into applications. Why? Ask any user.


I don't think blaming this mess on users makes much sense.

Smartphones on the other hand...


Uh, I certainly don't want apps. The Web is a terrible app platform, native is so much better in every case. Just documents, please.


Written from a web app

