clearleaf's comments

This is such an indictment of modern technology. No offense is meant to you for doing what works for you, but it is buck wild that this is the "fix" they've come up with. As somebody learning about this for the first time it sounds equivalent to a world where screenshotting became really hard so people started taking photos of their screen so they could screenshot the photo. How could such a fundamental aspect of using a computer become so ridiculous? It's like satire.


Unfortunately, some apps don't support text selection and on some websites the text selection is unpredictable.

I'd actually compare screen OCR to screenshots. Instead of every app and every website implementing their own screenshot functionality, the system provides one for you.

Same goes for text selection. Instead of every context having to agree on how text is tagged and which direction it flows, your phone has one quick way of letting you scan the screen for text.

To be fair, I still use the "hold the text to select it" approach when I want to continue with the "select all" action and have some confidence that it's going to do what I want.


> some apps don't support text selection and on some websites the text selection is unpredictable.

That correctly identifies the problem. Now why is that, and how can we fix it?

It seems fixable; native GUI apps have COM bindings that can fairly reliably produce the text present in certain controls in the vast majority of cases. Web apps (and "desktop" apps that are actually web apps) have accessibility attributes and at least nominally the notion of separating document data from presentation. Now why do so few applications support text extraction via those channels? If the answer is "it's hard/easier not to", how can we make the right way easier than the wrong way?
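For what it's worth, the plumbing for the "right way" already exists on the native side. Here's a rough sketch (assuming Python with pywinauto's UIA backend, which wraps those same COM accessibility interfaces) of dumping whatever text a window's controls expose, no OCR involved:

    # pip install pywinauto  (Windows-only sketch; any app that fills in
    # its UI Automation tree will expose its text this way)
    from pywinauto import Desktop

    # attach to some window by title -- Notepad is just a placeholder
    win = Desktop(backend="uia").window(title_re=".*Notepad.*")

    # walk the accessibility tree and print whatever text each control exposes
    for ctrl in win.descendants():
        text = ctrl.window_text()
        if text.strip():
            print(ctrl.friendly_class_name(), "->", text)

Apps that paint their own text (games, some canvas-based UIs) come back empty here, which is exactly when screen OCR becomes the only fallback.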


I don't see the point of publishing any AI-generated content. If I want an AI's opinion on something I can ask it. If I want an AI image I can generate it. I've never found it helpful to have someone else's AI output lying around.


My brain first started doing this with online ads as well.

The habit has adapted and evolved very strongly with the amount of exercise it gets from UIs, textbooks, signage, and basically every other visual medium possible these days. It has actually become a problem with how often I overlook important information due to it being situated in a "nothing useful will ever be here" zone. But it's difficult to consciously control that instinct when it's correct 99.999% of the time.


For me it just doesn't work at all. I don't know why, but no Windows installation I've used since Win7 has been able to find files even with the exact filename supplied. I don't disable the indexer; I can see it using CPU and disk resources, but it just doesn't find anything relevant when I search. When I use Search Everything on Windows instead, it works perfectly.


I think that due to how sophisticated anti-bot measures have gotten, bots now go through a "life cycle." An engagement bot spends the first phase of its life building up an innocent and legitimate-looking history. It does this by blending into the noise with innocuous and pointless comments that you'd never take a second glance at, and definitely not report or flag. Then, when the account has aged and is in good enough standing, metamorphosis to the adult stage occurs, and the bot starts posting the kind of blatant spam that you'd think would be automatically ban-filtered, but somehow isn't. These bot farmers are quite literally farming bots like vegetables and selling them when they've ripened.

I am very confident that this is the case in YouTube comments, where most people find they cannot use violent words like "kill" or "genocide" when discussing war, but somehow there are bots posting uncensored racial slurs.


I stopped using apps like this because they were always getting broken by YouTube. Obviously it's intentional sabotage, but still. It felt like I had to update those apps every time I used them, and sometimes no update was available yet. The mobile site never breaks, and you have full access to extensions if you use Firefox.


Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)

Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.


> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

It's likely they can filter the results for their own agents, but will leave other results as they are. Half the issue with normal results is their ads, and that's not going away.


“Show me the incentive and I’ll show you the outcome” - Charlie Munger

Kagi works better and will continue to do so as long as Kagi’s interests are aligned with users’ needs and Google’s aren’t.


>Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

Unlikely. There are very few people willing to pay for Kagi. The HN audience is not at all representative of the overall population.

Google can have really miserable search results and people will still use it. It's not enough to be as good as Google; you have to be 30% better than Google and still free in order to convert users.

I use Kagi and it's one of the few services I'm OK with a recurring charge from, because I trust the brand for whatever reason. Until they find a way to make it free, though, it can't replace Google.


They are transparent about their growth in paying customers; do you feel that this fairly consistent, linear rate of growth will never be enough to be meaningful?

https://kagi.com/stats


> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

They spent the last decade and a half encouraging the proliferation of garbage via "SEO". I don't see this reversing.


Google wants there to be garbage in search results, because most of the garbage sites are full of Google ads.


There are several startups providing web search solely for AI agents. I'm not sure any agent uses Google for this.


Maybe we should learn to pass reverse Turing tests and pretend to be LLMs so we can use this stuff lol.


A lot of these HR departments are in serious need of an investigation. If they've really determined for themselves that nobody is good enough for this job I guarantee there is some kind of discrimination or fraud going on.


Where I come from this was a widely held belief by the end of the 2000s: if you raise a child in an overly sterile environment and/or feed them a very limited diet, they are much more likely to develop a bad immune system and allergies. It was also believed that this idea came from science, but I guess not?

Here's an early preview of the next bombshell in this area: breastfeeding is extremely beneficial. "Infant formula" should not be the main thing a baby is consuming.

To me it discredits science a lot more when things like this are treated as arcane or brand-new knowledge. It's good when we can lock in reasoned beliefs as definite fact, instead of just reasoning, which is often incomplete or flat-out wrong. But when the reasoning turns out to be right and people act like this about it, it just makes it look like "scientists" know less about the world than my grandma, and that my grandma would make better calls on national health policy than the people currently in charge. Obviously that's not the case, but I wouldn't be unjustified in thinking so during times like this.


That's hardly a bombshell since it's common knowledge.

Baby formula ads in the UK are even required to include "breast is best" type language. I assume it's similar in most countries.


It is important to memorialize and standardize "common knowledge", as without memorialization, knowledge drift can cause a loss of the "commonality" intrinsic to the knowledge.


If we had a grid of cells where each cell was a number from 0-8, representing the number of neighbours, would that be equivalent to what these "links" are? I'm still finding it hard to understand.


Links are first-class entities. Some rules have cell states AND links. Some rules treat cell states as topological metrics of neighborhoods or neighbors. See the detailed .md cited above for more details.


This is interesting and may reveal additional properties of a certain class of CAs.

Yet, as some comments have already stated, what you are doing is basically studying a subclass of multi-state 2D CAs where specific states from the finite state set have a specific meaning attached.

In general a CA is defined as a dynamical system governed by a local rule operating on the neighborhood configuration and yielding a new state. The state set is typically finite, but the actual structure of the states can be anything you like. A valid state can be a tuple of the form (visible state, number of neighbors, sum of neighbors' degrees, …). Since the maximum neighborhood size is finite and the visible cell states are finite, there is a finite number of such tuples, and they constitute the state set on which the CA operates.

Summing up: you are studying CAs in which your multi-state setup has some implied meaning. Still cool and interesting.
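To make the equivalence concrete, here's a rough sketch (plain Conway's Life, not the submission's actual link rules) of a CA whose state is the tuple (alive, neighbor count). The second component is always derivable from the visible layer, so carrying it around doesn't change what the CA can do:

    import numpy as np

    # One Life step over tuple states (alive, neighbor_count); the count
    # component is redundant, since it is recomputed from the visible layer
    # each step, so this is just an ordinary multi-state CA.
    def step_tuple_ca(state):
        alive = state[..., 0]
        # neighbor counts on a toroidal (wrap-around) grid
        counts = sum(np.roll(np.roll(alive, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
        survive = (alive == 1) & ((counts == 2) | (counts == 3))
        born = (alive == 0) & (counts == 3)
        new_alive = (survive | born).astype(int)
        return np.stack([new_alive, counts], axis=-1)

    rng = np.random.default_rng(0)
    grid = np.stack([rng.integers(0, 2, size=(32, 32)),
                     np.zeros((32, 32), dtype=int)], axis=-1)
    grid = step_tuple_ca(grid)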

