Hacker News | gnfargbl's comments

People often say that the problem with string theory is that it doesn't make any prediction, but that's not quite right: the problem is that it can make almost any prediction you want it to make. It is really less of a "theory" in its own right and more of a mathematical framework for constructing theories.

One day some unusual observation will come along from somewhere, and that will be the loose end that allows someone to start pulling at the whole ball of yarn. Will this happen in our lifetimes? Unlikely, I think.


The problem is that once, a long time ago, String Theory was something that made concrete predictions that people just couldn't calculate.

Then people managed to calculate those predictions, and they were wrong. So the people working on that theory relaxed some constraints and tried again, and again, and again. So today it's that framework that you can use to write any theory you want.

That original theory was a good theory. Very compelling and just a small adjustment away from mainstream physics. The current framework is just not a good framework: it's incredibly hard to write any theory in it, to understand what somebody else created, and to calculate the predictions of the theories you create.


I am old enough to remember when string theory was expected to explain and unify all forces and predict everything. Sadly, it failed to deliver on that promise.

And there is no known single real world experiment that can rule out string theory while keeping general relativity and quantum mechanics intact.

More accurately, string theory is not wrong (because it just cannot be wrong). Because it does not predict anything and cannot invalidate anything, it does not help to advance our understanding of how to integrate general relativity and quantum mechanics.

It should not be called a theory - maybe a set of mathematical tools or whatever.


You can't really show it's wrong because there are dozens of different theories, but using the Wikipedia definition ("point-like particles of particle physics are replaced by one-dimensional objects called strings"), it's possible that particles are not strings. I guess it would then be like a fairies-at-the-end-of-the-garden theory. Good from a literary fiction point of view, but not reality.

string boot framework

I was planning to make a similar comment. Conjecturing that some theory in the string theory landscape [0] gives a theory of quantum gravity consistent with experiments that are possible but beyond what humans may ever be capable of isn't as strong a claim as it may first appear. The intuition I used to have was that string theory is making ridiculously specific claims about things that may always remain unobservable to humans. But the idea is not that experiments of unimaginable scale and complexity might reveal that the universe is made up of strings or something; it's just that it may turn out that string theory makes up such a rich and flexible family of theories that it could be tuned to the observed physics of some unimaginably advanced civilization. My impression is that string theory is not so flexible that it's uninteresting, though. There's some interesting theoretical work along these lines around exploring the swampland [1].

[0] https://en.wikipedia.org/wiki/String_theory_landscape

[1] https://en.wikipedia.org/wiki/Swampland_(physics)


Or, that day will never come, because string theory isn't reflective of the actual world, or because there are so many theories possible under the string theory rubric that we can never find the right one, or because the energies involved to see any effect are far beyond what could be reached in experiment.

It isn't completely implausible that a future civilisation could perform the experiments to gather that data, somehow; but it is hard to envisage how we do it here on Earth.

Your implicit point is a good one. Is it sensible to have a huge chunk of the entire theoretical physics community working endlessly on a theory that could well end up being basically useless? Probably not.


There is not a "huge chunk" of the theoretical physics community working on string theory, and their never was. For one, it is far less common a topic of research now then it was earlier when it was more popular, but even then "huge" was really "a lot of universities had a grant for string theory investigation because it looked promising".

It mostly hasn't worked out and now people are moving on to other things.

The single worst thing that happened, though, was the populism: a small group of people with credentials started putting out pop-sci books and doing interviews, well in excess of what their accomplishments warranted. People are like "so many people are working on this" because there were like, 3 to 5 guys who always said "yes" to an interview and talked authoritatively.


Huge is a subjective term, but go and count the number of participants at Strings 2025 [1]. Then realise that is just one of many conferences [2]. It's still a very big field.

[1] https://nyuad.shorthandstories.com/strings-conference-abu-dh...

[2] https://www.stringwiki.org/wiki/Conferences


A meaningless statement if you aren't going to introduce any points of comparison. But I would hardly call 735 conference participants a huge conference. Like, that's big, but there are a lot more than 735 theoretical physicists.

Claude tells me that there are about 5,000 theoretical high energy physicists actively publishing as tracked by INSPIRE-HEP (the de facto search engine in that field). If we estimate that about a third or half of string theorists take part in Strings in a given year -- because there are other big conferences like String Pheno that will be more relevant for many -- then we have something like 30-45% of high energy theorists working on string theory.

I agree that people should be "moving on to other things," but I'm not seeing the evidence that they actually are.


Are all the attendees of a Linux conference Linux developers? Are all the people who attend CCC penetration testers?

> the problem is that it can make almost any prediction you want it to make

In logic this is either the principle of "contradiction elimination" or a "vacuous truth", depending on how you look at it - i.e. given sufficiently bad premises, you can prove anything.


> less of a "theory" in its own right and more of a mathematical framework for constructing theories.

so it's javascript?


A bit like LISP then ...

Theorists are real good at bending around experimental data, unusual or not

Both you and the poster above you may be misunderstanding the point that Jonathan Hall KC appears to be making. If you take a look at what he actually writes [1], then it is pretty clear that he is presenting these hypothetical cases as examples of obvious over-reach.

This is a warning from the independent reviewer that the law is too potentially broad, not an argument to retain these powers.

[1] https://assets.publishing.service.gov.uk/media/69411a3eadb57..., pages 112 and 113


So: OP wants to grow, but at his own pace and in his own way. He values transparency and autonomy. He doesn't mention salary as being particularly important, but does want a good work/life balance.

I wonder if he's considered a job as a developer in the Dutch government?


Be aware of threat actors, too: you're giving them an easy data exfil route without the hassle and risk of them having to set up their own infrastructure.

Back in the day you could have stood up something like this and worried about abuse later. Unfortunately, now, a decent proportion of the early users of services like this tend to be people looking to misuse them.


What's a "data exfil route"?


I'm not who you asked, but essentially, when you write malware that infects someone's PC, that in itself doesn't really help you much. You usually want to get out passwords and other data that you might have stolen.

This is where an exfil (exfiltration) route is needed. You could just send the data to a server you own, but you have to make sure that there are fallbacks once that one gets taken down. You also need to ensure that your exfiltration won't be noticed by a firewall and blocked.

Hosting a server locally, easily, on the infected PC, that can expose data under a specific address is (to my understanding) the holy grail of exfiltration; you just connect to it and it gives you the data, instead of having to worry much about hosting your own infrastructure.


Thanks!

Though the public address is going to be random here so how will the hacker figure out which tunnl.gg subdomain to gobble up?


That's actually a fair defence against this kind of abuse. If the attacker has to get some information (the tunnel ID) out of the victim's machine before they can abuse this service, then it is less useful to them because getting the tunnel ID out is about as hard as just getting the actual data out.

However, if "No signup required for random subdomains" implies that stable subdomains can be obtained with a signup, then the bad guys are just going to sign up.


I've seen lots of weird tricks malware authors use, people are creative. My favorite is that they'd load up a text file with a modified base64 table from Dropbox which points to the URL to exfiltrate to. When you report it to Dropbox, they typically ignore the report because it just seems like random nonsense instead of being actually malicious.


> Hosting a server locally, easily, on the infected PC, that can expose data under a specific address is (to my understanding) the holy grail of exfiltration; you just connect to it and it gives you the data, instead of having to worry much about hosting your own infrastructure.

A permanent SSH connection is not exactly discreet, though...


The real kicker is in point 1.13:

> website activity logs show the earliest request on the server for the URL https://obr.uk/docs/dlm_uploads/OBR_Economic_and_fiscal_outl.... This request was unsuccessful, as the document had not been uploaded yet. Between this time and 11:30, a total of 44 unsuccessful requests to this URL were made from seven unique IP addresses.

In other words, someone was guessing the correct staging URL before the OBR had even uploaded the file to the staging area. This suggests that the downloader knew that the OBR was going to make this mistake, and they were polling the server waiting for the file to appear.

The report acknowledges this at 2.11:

> In the course of reviewing last week’s events, it has become clear that the OBR publication process was essentially technically unchanged from EFOs in the recent past. This gives rise to the question as to whether the problem was a pre-existing one that had gone unnoticed.


> In other words, someone was guessing the correct staging URL before the OBR had even uploaded the file to the staging area. This suggests that the downloader knew that the OBR was going to make this mistake, and they were polling the server waiting for the file to appear.

The URLs are predictable. Hedge-funds would want to get the file as soon as it would be available - I imagine someone set up a cron-job to try the URL every few minutes.
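
Concretely, the whole thing could be as small as something like this - an illustrative sketch only, with a made-up URL rather than the actual OBR staging path:

```go
// Illustrative sketch: poll a predictable URL until it exists.
// The URL below is hypothetical, not the real staging path.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	const url = "https://example.gov.uk/docs/economic_outlook_2025.pdf" // hypothetical
	for {
		resp, err := http.Head(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				log.Printf("document is live: %s", url)
				return // downloading and parsing would start here
			}
		}
		time.Sleep(2 * time.Minute)
	}
}
```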


I used to do this for BOE / Fed minutes, company earnings etc on the off chance they published it before the official release time.

2025-Q1-earnings.pdf - smash it every 5 seconds - rarely worked out, generally a few seconds head start at best. By the time you pull up the pdf and parse the number from it the number was on the wires anyway. Very occasionally you get a better result however.


This is so incompetent.

Given the market significance of the report it's damn obvious that this would happen. They should have assumed that security via obscurity was simply not enough, and the OBR should have been taking active steps to ensure the data was only available at the correct time.

> Hedge-funds would want to get the file as soon as it would be available - I imagine someone set up a cron-job to try the URL every few minutes.

It's not even just hedge-funds that do this. This is something individual traders do frequently. This practice is commonplace because a small edge like this, with the right strategy, is all you need to make serious profits.


They weren't in any way attempting to rely on security by obscurity.

They didn't assume nobody would guess the URL.

They did take active steps to ensure the data was only available at the correct time.

But they didn't check that their access control was working, and it wasn't.


This setup was not initially approved, see 1.7 in the document:

> 1.7 Unlike all other IT systems and services, the OBR’s website is locally managed and outside the gov.uk network. This is the result of an exemption granted by the Cabinet Office in 2013. After initially rejecting an exemption request, the Cabinet Office judged that the OBR should be granted an exemption from gov.uk in order to meet the requirements of the Budget Responsibility and National Audit Act. The case for exemption that the OBR made at the time centred on the need for both real and perceived independence from the Treasury in the production and delivery of forecasts and other analysis, in particular in relation to the need to publish information at the right time.


Gov.uk does not use some random WordPress plugin to protect information of national significance; docs at https://docs.publishing.service.gov.uk/repos/whitehall/asset...


Part of this is a product of the UK's political culture where expenses for stuff like this are ruthlessly scrutinised from within and without.

The idea of the site hosting such an important document running independently on WordPress, being maintained by a single external developer and a tiny in-house team would seem really strange to many other countries.

Everyone is so terrified of headlines like "OBR spends £2m upgrading website" that you get stuff like this.


It's not an easy call. Sometimes, one or two dedicated and competent people can vastly outperform large and bureaucratic consulting firms, for a fraction of the price. And sometimes, somebody's cousin "who knows that internet stuff" is trousering inflated rates at the taxpayer's expense, while credentialed and competent professionals are shut out from old boys' networks. One rule does not fit all.


It would work if old boys' networks were not the de facto pool that the establishment hired from. The one time UK GOV did go out and hire the best of the best from the private sector, regardless of what uni they went to, we got GDS and it worked very well - but it seems like an exception to usual practice.


> This suggests that the downloader knew that the OBR was going to make this mistake, and they were polling the server waiting for the file to appear.

I think most of the tech world heard about the Nobel Peace Prize award, so it doesn't seem that suspicious to me that somebody would just poll URLs.

Especially since before the peace prize there have been issues with people polling US economic data.

My point is strictly that knowing they should poll a URL is not evidence of insider activity.


How does the Nobel Peace Prize figure into this? I seem to be on the other side that didn't hear about the award. Which is not surprising as I don't follow it, but also I haven't worked out query terms to connect it with OBR.


Somebody monitored file metadata to figure out who the winner of the Nobel Prize was prior to the official announcement, based on which candidate's page had been modified. They then used that to profit financially in betting markets.

It relates to OBR because it's another scenario where people just by polling the site can figure out information that wasn't supposed to be released yet. And then use that information to profit.

Since a recent instance of polling was in the news, the idea of polling isn't really evidence of an insider trying to leak data versus somebody just cargo-culting a technique. Plus, polling of financial data was already common.


Thank you for answering that person’s question so clearly. I was also in the dark and this really helped.


Because it was insider traded on Polymarket many hours before it was publicly announced.


The report also says a previous report was also accessed 30 mins early.


Could this be a problem not with AI, but with our understanding of how modern economies work?

The assumption here is that employees are already tuned to be efficient, so if you help them complete tasks more quickly then productivity improves. A slightly cynical alternative hypothesis could be that employees are generally already massively over-provisioned, because an individual leader's organisational power is proportional to the number of people working under them.

If most workers are already spending most of their time doing busy-work to pad the day, then reducing the amount of time spent on actual work won't change the overall output levels.


You describe the "fake email jobs" theory of employment. Given that there are way fewer email jobs in China does this imply that China will benefit more from AI? I think it might.


Are there fewer busy-work jobs in China? If so, why? It's an interesting assertion, but human nature tends to be universal.


It could be a side effect of China pursuing more markets, having more industry, and not financializing/profit-optimizing everything. Their economy isn't universally better but in a broad sense they seem more focused on tangible material results, less on rent-seeking.


Could argue there are more. Lots of loss making SOEs in China.


less money, less adult daycare


As China’s population gets older and more middle class is this shifting to be more like America?

I really don’t know and am curious.


This is a part of it indeed. Most people (and even a significant number of economists) assume that the economy is somehow supply-limited (and it doesn't help that most econ 101 classes introduce markets as a way of managing scarcity), but in reality demand is the limit in 90-ish% of cases.

And when it's not, supply generally doesn't increase as much as it could, because suppliers expect to be demand-limited again at some point and don't want to invest in overcapacity.


Agreed. If you "create demand", it usually just means people are spending on the thing you provide, and consequently less on something else. Ultimately it goes back to a few basic needs, something like Maslow's hierarchy of needs.

And then there's followup needs, such as "if I need to get somewhere to have a social life, I have a need for transportation following from that". A long chain of such follow-up needs gives us agile consultants and what not, but one can usually follow it back to the source need by following the money.

Startup folks like to highlight how they "create value", they added something to the world that wasn't there before and they get to collect the cash for it.

But assuming that population growth will eventually stagnate, I find it hard not to ultimately see it all as a zero sum game. Limited people with limited time and money, that's limited demand. What companies ultimately do is fight each other for that. And when the winners emerge and the dust settles, supply can go down to meet the demand.


It's not a zero sum game. Think of an agronomist who visits a farm and instructs the farmer to cut a certain plant for the animals to eat at a certain height instead of whenever; the plant then provides more food for the animals exclusively due to that, with no other input into the system. Now the animals are cheaper to feed, so there's more profit for the farmer and cheaper food for people.

How would this be zero sum?


It would be if demand was limited. Let's assume the people already have enough food, and the population is not growing - that was my premise. Through innovation, one farmer can grow more than all the others.

Since there already was enough food, the market is saturated, so it would effectively reduce the price of all food. This would change the ratio so that the farmer who grows more gets more money in total, and every other farmer gets a bit less.

As long as there is any sort of growth involved - more people, more appetite, whatever, it would be value creation. But without growth, it's not.

At least not in the economical sense. Saving resources and effort that go into producing things is great for society, on paper. But with the economic system that got us this far, we have no real mechanism for distributing the gains. So we get oversupplied producers fighting over limited demand.

The world is several orders of magnitude more complex than that example, of course. But that's the basic idea.

That said, I'm not exactly an economist, and considering it's a bleak opinion to hold, I'd like to learn something based on which I could change it.


Late comment, but if technology brought down the price of food then people could spend less on food and more on other goods and services. Or the same on higher quality food. You don't need an increasing population for that. The improvement in agriculture could mean some farmers would have to find other work. So you can have economic growth with a stagnant or falling population. And you can rather easily have economic growth on a per-capita basis with no overall GDP growth, like is common in Japan today.

About the farmer needing to change jobs, in the interview that is the subject of this thread Ilya Sutskever speaks with wonder about humans' ability to generalize their intelligence across different domains with very little training. Cheaper food prices could mean people eat out or order-in more and then some ex-farmers might enter restaurant or food preparation businesses. People would still be getting wealthier, even without the tailwind of a growing population.


Who will eat the extra meat if the population has reached parity?


Varies depending on the field and company. Sounds like you may be speaking from your own experiences?

In medicine, we're already seeing productivity gains from AI charting leading to an expectation that providers will see more patients per hour.


> In medicine, we're already seeing productivity gains from AI charting leading to an expectation that providers will see more patients per hour.

And not, of course, an expectation of more minutes of contact per patient, which would be the better outcome optimization for both provider and patient. Gotta pump those numbers until everyone but the execs are an assembly line worker in activity and pay.


I don't think that more minutes of contact is better for anybody.

As a patient, I want to spend as little time with a doctor as possible and still receive maximally useful treatment.

As a doctor, I would want to extract maximal comp from insurance, which I don't think is tied to time spent with the patient, but rather to the number of different treatments given.

Also please note that in most of the western world, medical personnel are currently massively overworked, and so reducing their overall workload would likely lead to better results per treatment given.


> leading to an expectation that providers will see more patients per hour

> reducing their overall workload

what?


It is the delusion of the Homo Economicus religion.

I think the problem is a strong tie network of inefficiency that is so vast across economic activity that it will take a long time to erode and replace.

The reason it feels like it is moving slowly is because of the delusion that the economy is made up of a network of Homo Economicus agents who would instantaneously adopt the efficiencies of automated intelligence.

As opposed to the actual network of human beings, who care about their lives because of their finite existence and who don't have much to gain from economic activity changing at that speed.

That is different though than the David Graeber argument. A fun thought experiment that goes way too far and has little to do with reality.


Let's invert that thinking. Imagine you're the "security area director" referenced. You know that DJB's starting point is assumed bad faith on your part, and that because of that starting point DJB appears bound in all cases to assume that you're a malicious liar.

Given that starting point, you believe that anything other than complete capitulation to DJB is going to be rejected. How are you supposed to negotiate with DJB? Should you try?


To start with, you could not lie about what the results were.


Your response focuses entirely on the people involved, rather than the substance of the concerns raised by one party and upheld by 6 others. I don't care if 1 of the 7 parties regularly drives busloads of orphans off a cliff, if the concerns have merit, they must be addressed. The job of the director is to capitulate to truth, no matter who voices it.

Any personal insults one of the parties lobs at others can be addressed separately from the concerns. An official must perform their duties without bias, even concerning somebody who thinks them the worst person in the world, and makes it known.

tl;dr: sometimes the rude, loud, angry constituent at the town hall meeting is right


I'm a huge Go proponent but I don't know if I can see much about Go's module system which would really prevent supply-chain attacks in practice. The Go maintainers point [1] at the strong dependency pinning approach, the sumdb system and the module proxy as mitigations, and yes, those are good. However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely-used but boring? I feel like attackers would get away with that for a period of time that could be weeks.

This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?

[1] https://go.dev/blog/supply-chain


A big thing is that Go does not install the latest version of transitive dependencies. Instead it uses Minimal version selection (MVS), see https://go.dev/ref/mod#minimal-version-selection. I highly recommend reading the article by Russ Cox mentioned in the ref. This greatly decreases your chances of being hit by malware released after a package is taken over.

In Go, access to the OS and to process execution requires certain imports, imports that must occur at the beginning of the file; this helps when scanning for malicious code. Compare this to JavaScript, where one could require("child_process") or import() at any time.
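
For illustration, this is the kind of thing a scanner gets to key on - a trivial sketch:

```go
// Trivial sketch of the point above: OS and process access in Go has to be
// declared up front in the import block, which is what static scanning keys on.
package main

import (
	"fmt"
	"os"      // environment and file access
	"os/exec" // spawning external processes
)

func main() {
	fmt.Println(os.Getenv("HOME"))
	cmd := exec.Command("echo", "hello") // constructed here, never run
	_ = cmd
}
```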

Personally, I started to vendor my dependencies using go mod vendor and diff after dependency updates. In the end, you are responsible for the effect of your dependencies.


In Go you know exactly what code you’re building thanks to go.sum, and it’s much easier to audit changed code after upgrading - just create vendor dirs before and after updating packages and diff them; send to AI for basic screening if the diff is >100k loc and/or review manually. My projects are massive codebases with 1000s of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes I do catch stuff occasionally, tho nothing actively adversarial so far)

I don’t believe I can do the same with Rust.


You absolutely can, both systems are practically identical in this respect.

> In Go you know exactly what code you’re building thanks to gosum

Cargo.lock

> just create vendor dirs before and after updating packages and diff them [...] I don’t believe I can do the same with Rust.

cargo vendor


cargo vendor


The Go standard library is a lot more comprehensive and usable than Node's, so you need fewer dependencies to begin with.


Aside from other security features already mentioned Go also doesn't execute code at compile time by design.

There is no airtight technical solution, for any language, for preventing malicious dependencies making it into your application. You can have manual or automated review using heuristics but things will still slip through. Malicious code doesn't necessarily look obvious, like decoding some base64 and piping it into bash, it can be an extremely subtle vulnerability sprinkled in that nobody will find until it's too late.

RE dependency cooldowns I'm hoping Go will get support for this. There's a project called Athens for running your own Go module proxy - maybe it could be implemented there.
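
In the meantime, a cooldown check is easy enough to script against the module proxy's documented `@v/<version>.info` endpoint, which reports the publish time. A rough sketch (a hypothetical helper, not an existing tool; module paths containing uppercase letters would also need the proxy's case-encoding, omitted here):

```go
// Rough sketch of a cooldown check against the default public module proxy.
// The @v/<version>.info endpoint returns the version's publish time.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type versionInfo struct {
	Version string
	Time    time.Time
}

func publishedAt(module, version string) (time.Time, error) {
	url := fmt.Sprintf("https://proxy.golang.org/%s/@v/%s.info", module, version)
	resp, err := http.Get(url)
	if err != nil {
		return time.Time{}, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return time.Time{}, fmt.Errorf("proxy returned %s for %s", resp.Status, url)
	}
	var vi versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&vi); err != nil {
		return time.Time{}, err
	}
	return vi.Time, nil
}

func main() {
	const cooldown = 14 * 24 * time.Hour
	published, err := publishedAt("golang.org/x/text", "v0.3.0")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if time.Since(published) < cooldown {
		fmt.Println("version is younger than the cooldown; hold off on updating")
	} else {
		fmt.Println("version is older than the cooldown; ok to update")
	}
}
```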


> However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

You can't, really, aside from full on code audits. By definition, if you trust a maintainer and they get compromised, you get compromised too.

Requiring GPG signing of releases (even just git commit signing) would help, but that's more work for people to distribute their stuff, and inevitably someone will make an insecure but convenient way to automate that away from the developer.


If I understand TFA then the defendant is arguing that his message about owning a gun was made less glib by the verbatim inclusion of a tears-of-joy emoji plus a smiling-devil-horns emoji at the end.

That is... an unusual argument to make.


The recent Azure DDoS used 500k botnet IPs. These will have been widely distributed across subnets and countries, so your blocking approach would not have been an effective mitigation.

Identifying and dynamically blocking the 500k offending IPs would certainly be possible technically -- 500k /32s is not a hard filtering problem -- but I seriously question the operational ability of internet providers to perform such granular blocking in real-time against dynamic targets.

I also have concerns that automated blocking protocols would be widely abused by bad actors who are able to engineer their way into the network at a carrier level (i.e. certain governments).
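
To be concrete about the "not a hard filtering problem" part: in software, the lookup side is trivial. A toy sketch with synthetic data (lookup only; whether carrier-grade routers can apply such a list at line rate is the genuinely hard operational question):

```go
// Toy illustration of the lookup side only: 500k exact /32s fit comfortably
// in memory, and per-address membership checks are cheap in software.
package main

import (
	"fmt"
	"net/netip"
	"time"
)

func main() {
	// Build a synthetic block list of 500,000 distinct /32s in 10.0.0.0/8.
	blocked := make(map[netip.Addr]struct{}, 500_000)
	for i := uint32(0); i < 500_000; i++ {
		blocked[netip.AddrFrom4([4]byte{10, byte(i >> 16), byte(i >> 8), byte(i)})] = struct{}{}
	}

	probe := netip.MustParseAddr("203.0.113.7") // stand-in for a packet's source address
	const n = 1_000_000
	start := time.Now()
	hits := 0
	for i := 0; i < n; i++ {
		if _, ok := blocked[probe]; ok {
			hits++
		}
	}
	fmt.Printf("%d entries, %d lookups in %v (hits=%d)\n", len(blocked), n, time.Since(start), hits)
}
```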


> 500k /32s is not a hard filtering problem

Is this really true? What device in the network are you loading that filter into? Is it even capable of handling the packet throughput of that many clients while also handling such a large block list?


But this is not one subnet. It is a large number of IPs distributed across a bunch of providers, and handled possibly by dozens if not hundreds of routers along the way. Each of these routers won't have trouble blocking a dozen or two IPs that would be currently involved in a DDoS attack.

But this would require a service like DNSBL / RBL which email providers use. Mutually trusting big players would exchange lists of IPs currently involved in DDoS attacks, and block them way downstream in their networks, a few hops from the originating machines. They could even notify the affected customers.

But this would require a lot of work to build, and a serious amount of care to operate correctly and efficiently. ISPs don't seem to have a monetary incentive to do that.


It also completely overlooks the fact that some of the traffic has spoofed source IP addresses and a bad actor could use automated black holing to knock a legitimate site offline.


> a bad actor could use automated black holing to knock a legitimate site offline.

No, in my concept the host can only manage the traffic targeted at it and not at other hosts.


That already exists… that's part of Cloudflare's and other vendors' mitigation strategy. There's absolutely no chance ISPs are going to extend that functionality to random individuals on the internet.

