
"Faced with such a marked defensive negative attitude on the part of a biased culture, men who have knowledge of technical objects and appreciate their significance try to justify their judgment by giving to the technical object the only status that today has any stability apart from that granted to aesthetic objects, the status of something sacred. This, of course, gives rise to an intemperate technicism that is nothing other than idolatry of the machine and, through such idolatry, by way of identification, it leads to a technocratic yearning for unconditional power. The desire for power confirms the machine as a way to supremacy and makes of it the modern philtre (love-potion)." Gilbert Simondon, On the mode of existence of technical objects.

This is exactly what I take away from this kind of article: engineering for the cause of engineering. I am not saying we should not investigate how to improve our engineered artifacts, or that we should not improve them. But I see a generalized lack of reflection on why we should do it, and I think it is related to a detachment from the domains we create software for. The article suggests uses of the technology that come from such different ways of using it that it loses coherence as a technical item.


For each of the items discussed I explicitly mention why they would be desirable to have. How is this engineering for the sake of engineering?


True, for each of the points discussed there is an explicit mention of why it is desirable. But those are technical solutions to technical problems. There is nothing wrong with that. The issue is that the whole article is about technicalities because of technicalities, hence the 'engineering for the cause of engineering' (which is different from '...for the sake of...'). It is at this point that the idea of rebuilding Kafka becomes a purely technical one, detached from the intention behind having something like Kafka in the first place. Other commenters in the thread also pointed out that Kafka lacks a clear intention. I agree that a lot of software nowadays suffers from the same problem.


I can't believe this article is being published in Nature. The article is flawed, plagued with assumptions that I suspect the author doesn't even notice (like what we really mean by AGI, the epistemological problems/assumptions behind intelligence, the real nature of thinking, the real functioning of the human brain). It is really curious that the philosophical community is addressing the debate on what AI really is and its implications, while the computer science community reads almost nothing about philosophy. Regarding the fear of 'losing control of it', I would suggest reading the works of (or at least about) Günther Anders and Bernard Stiegler. Technology (in this case AI) is inseparable from the human being, to the point that we already lost control of technology, its use and its meaning (about 100 years ago). Another thing that surprises me is how blind the computer science community is to the work of Hubert Dreyfus and other contemporary philosophers who analyze AI from an epistemological and philosophical perspective. But, actually, I should not be surprised: we barely study philosophy in any scientific discipline at university. This rhetoric about how AI is similar to the human brain is starting to get a bit boring. It assumes a very simplistic view of the brain and turns a deaf ear to other lines of research (like language acquisition and embodiment, mind/brain duality, the epistemological basis for knowledge acquisition, the ontological basis of causal reasoning...). And above all, what is really upsetting is the techno-optimism behind this way of thinking.


This is not a scientific paper published in Nature by researchers; it is a news editorial written by an editor/journalist. Don't be fooled by the domain.


Yes. Why do people have a problem with Nature publishing the occasional 'article' instead of a 'paper'?


You are probably thinking of Nature Communications. This article was posted on their more pop-science publishing site.

You aren't the first one misled by the Nature brand, btw. If you look into past submissions you will find similar comments: https://news.ycombinator.com/from?site=nature.com


I know it is an article and not a scientific publication, but that does not change the fact that the article is not serious at all with respect to the ongoing discussion on AI. If this gets published in Nature, even as an opinion article, it is because there is a general ideology that can actually produce this kind of content.


I agree, and I think it's good that you pointed that out. Unfortunately, the quality of nature.com articles is often quite bad.

