Hacker News | Flashtoo's comments

The notion that the brain uses less energy than an incandescent lightbulb and can store less data than YouTube does not mean we have had the compute and data needed to make AGI "for a very long time".

The human brain is not a 20-watt computer ("100 watts per day" is not right) that learns from scratch on 2 petabytes of data. State manipulations performed in the brain can be more efficient than what we do in silicon. More importantly, its internal workings are the result of billions of years of evolution, and continue to change over the course of our lives. The learning a human does over its lifetime is assisted greatly by the reality of the physical body and the ability to interact with the real world to the extent that our body allows. Even then, we do not learn from scratch. We go through a curriculum that has been refined over millennia, building on knowledge and skills that were cultivated by our ancestors.

An upper bound of compute needed to develop AGI that we can take from the human brain is not 20 watts and 2 petabytes of data, it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.


> it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.

You have great points there and I agree. The only issue I take is with your remark above. Surely, by your own reasoning, this is not quite right. Evolution by natural selection is not a deterministic process, so 4 billion years is just one of many possible periods of time it could take, not necessarily the longest or the shortest.

Also, re "The human brain is not a 20-watt computer ("100 watts per day" is not right)": I was merely saying that there exists an intelligence that consumes 20 watts per day, so it is possible to run an intelligence on that much energy. This and the compute bit refer not to the training costs but to the running costs; after all, it would be useless to hit AGI if we do not have enough energy or compute to run it for longer than half a millisecond, or the means to increase the running time.

Obviously, the path to designing and training AGI is going to take much more than that, just as the human brain did. But given that the path to the emergence of the human brain wasn't the most efficient one, owing to the inherent randomness of evolution by natural selection, there is no need to pretend that all the circumstances around the development of the human brain apply to us: our process isn't random at all, nor is it parallel at a global scale.


> Evolution by natural selection is not a deterministic process so 4 billion years is just one of many possible periods of time needed but not necessarily the longest or the shortest.

That's why I say that is an upper bound - we know that it _has_ happened under those circumstances, so the minimum time needed is not more than that. If we reran the simulation it could indeed very well be much faster.

I agree that 20 watts can be enough to support intelligence and if we can figure out how to get there, it will take us much less time than a billion years. I also think that on the compute side for developing the AGI we should count all the PhD brains churning away at it right now :)


"watts per day" is just not a sensible metric. watts already has the time component built in. 20 watts is a rate of energy usage over time.


Right after that quote:

"The key is ensuring that any future cuts at NASA are not indiscriminate. If and when Jared Isaacman is confirmed by the US Senate as the next NASA administrator, it will be up to him and his team to make the programmatic decisions about which parts of the agency are carrying their weight and which are being carried, which investments carry NASA into the future, and which ones drag it into the past. If these future cuts are smart and position NASA for the future, this could all be worth it. If not, then the beloved agency that dares to explore may never recover."


NASA is a public service agency which literally makes no money or profit - by design.

What are they even on?


Ketamine.


You pay an annual % tax on the value of your investments less debt as of January 1st. This means you still pay taxes if your assets lose value, too. It's a wealth tax that pretends to be a capital gains tax.


It doesn't pretend to be a capital gains tax at all. It's a tax on income from assets, which is in practice more or less a 'wealth tax', which is also why it's called the Dutch word for 'wealth tax' in the first place.


It is a tax on an assumed return on assets, determined as a set percentage of wealth. "Vermogensrendementsheffing" means a "tax on return on wealth", not on the wealth itself. In name it is not a wealth tax, but in reality it is, since the assumed return that is taxed has no relation to the true return. This relates to the recent decisions declaring this partially unlawful, see e.g. https://www.tilburguniversity.edu/magazine/supreme-court-net...
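The mechanics described above can be sketched in a few lines. The rates below are hypothetical placeholders for illustration, not actual Dutch box-3 figures, which vary by year and wealth bracket:

```python
# Illustrative sketch of a tax on an *assumed* return over net wealth.
# assumed_return_rate and tax_rate are made-up example values.
def assumed_return_tax(assets, debt, assumed_return_rate, tax_rate):
    """Tax owed on a fixed assumed yield over net wealth as of Jan 1,
    regardless of the return actually realized that year."""
    net_wealth = assets - debt
    assumed_return = net_wealth * assumed_return_rate
    return assumed_return * tax_rate

# 100k in assets, 20k in debt, a 4% assumed yield taxed at 30%:
# the tax is due even if the portfolio actually lost value.
print(assumed_return_tax(100_000, 20_000, 0.04, 0.30))  # 960.0
```

The key property is that the actual return never appears as an input, which is why the scheme behaves like a wealth tax in practice.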


Ok. I'd quibble about it being a capital gains tax at all, but anyway.


Are there any specific podcasts you would recommend?


For history stuff, I just love: https://historyofeverythingpodcast.com

The creator got started on TikTok doing weird history trivia with his wife and started a podcast after that.

For my "going to sleep" podcasts, I use "The Indicator from Planet Money" (the ads are a bit distracting at times), "More or Less" (facts about numbers in the news), "Robot or Not" and "Ungeniused". All are long enough to fall asleep to, but interesting enough to start listening.


History of Rome

Revolutions

The History of Byzantium

The British History Podcast


Paracetamol and alcohol is actually not a dangerous combination at all as far as the liver is concerned. That is why there is no warning against combining the two in the information leaflet that comes with it. Paracetamol is not toxic, but its intermediate metabolite NAPQI is. The enzyme that converts paracetamol into NAPQI is the same that breaks down alcohol, and it has a higher affinity for alcohol meaning that it will be too busy working on the alcohol to turn the paracetamol into toxic NAPQI.

Long-term alcohol abusers will develop more of this enzyme, so they are more likely to get liver damage from paracetamol though.


When you evaluate an ML approach, you should use one part of the data to train your model and a completely separate part to evaluate it. Otherwise, your model can just memorize parts of the data (or overfit in some other way), resulting in artificially high performance. Data leakage is when there is a problem in this separation and you somehow use information about the evaluation dataset in the model training process. The table in the article lists various examples. The simplest would be to just not have a separate evaluation set. A more subtle one is if you normalize your input data based on both the training and evaluation sets; this way the normalization will be better suited to the evaluation set than it should be if you had no knowledge of it, resulting in artificially high performance.
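The normalization example above can be shown in a few lines of plain Python (no ML library needed): statistics fitted on the combined data are skewed toward the evaluation set, while statistics fitted on the training set alone are not.

```python
import statistics

train = [1.0, 2.0, 3.0, 4.0]
test = [100.0]  # an unusual evaluation point the model shouldn't know about

# WRONG: normalization statistics computed over train + test leak
# information about the evaluation data into preprocessing.
leaky_mean = statistics.mean(train + test)

# RIGHT: fit the normalization on the training set only, then apply
# the same (frozen) transform to the evaluation set.
train_mean = statistics.mean(train)
train_std = statistics.stdev(train)
normalized_test = [(x - train_mean) / train_std for x in test]

print(leaky_mean)   # 22.0 -- dragged far toward the test point
print(train_mean)   # 2.5  -- reflects only what training saw
```

The same principle applies to any preprocessing step that is fitted to data (scaling, feature selection, imputation): fit on the training split only, then reuse the fitted parameters on the evaluation split.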


Great, now could you please email the authors and explain to them how to explain "leakage" in their draft paper, so the rest of us can read it too?


E.g. on Google if you search for "how to tie a tie", a little info box may pop up with step by step instructions. This content is taken from some website, but that website gets no page hits or ad revenue. Instead, Google gets to serve ads on the search engine results page.

(I don't know if this happens for this specific example, but Google does this for some searches)


> that website gets no page hits

Part of why sites participate in the infobox program is that in practice you do get quite a lot of hits from it: many people click through to see the answer in context.


Ok, I just tried but don't see that info box, but that is exactly what I was asking for. Thank you very much, I didn't think of that.


Amsterdam is #9 in the report (scoring 0.1 lower than Toronto), but not mentioned in the article for some reason.


This isn't exactly what you are talking about, but this art/concept for self-powered student housing came to mind: https://www.humanpowerplant.be/human_power_plant/human-power...


Travelling by train is more sustainable.

