
Philosophers are mostly unaware of artificial neural networks. The game has changed: you can understand a lot about the human mind if you understand AI. Don't get too stuck in the past. How about an objection to what I said? For example, a case where someone is conscious without continuous propagation of neural signals?


You didn’t provide enough of a hypothesis to seriously discuss.

> A case where someone is conscious but without continuous propagation of neural signals?

That would be irrelevant. All known conscious beings are made up of biological cells, but that doesn’t prove that all conscious beings must be made of biological cells, or that biological cells are the key causative factor in consciousness. The same goes for “continuous propagation of neural signals.”

You described a personal conjecture as though it solved a known hard problem, even throwing in the word “just” as though the solution is really simple. This is a lot like the Feynman quote about quantum mechanics: if you think you understand it, you almost certainly don’t. You may not even have recognized the core problem yet. The original Chalmers paper is a good place to start: https://consc.net/papers/facing.pdf

But coming at it from a computational perspective, in some ways it's even easier to see the problem. We don't generally assume that a deterministic, non-neural-net program written in, say, Python has a conscious subjective experience. To use Nagel's terminology, there is "nothing it is like" to be that program. But an LLM or any other computational neural net is no different from a program like that. It's executing deterministic instructions, like a machine, because it is a machine. We can speculate about consciousness being some sort of emergent property that arises in such systems given the right conditions, but that's all it is: speculation.
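
To make that concrete, here's a toy forward pass in plain NumPy (a minimal sketch; the two-layer network and its weights are arbitrary, not any real model). It's nothing but deterministic arithmetic: the same weights and the same input always produce the same output, exactly like any other program.

    # A toy two-layer network: pure deterministic arithmetic.
    import numpy as np

    rng = np.random.default_rng(seed=0)   # fixed seed, so the weights are reproducible
    W1 = rng.standard_normal((8, 4))      # arbitrary toy weights
    W2 = rng.standard_normal((2, 8))

    def forward(x):
        h = np.maximum(0, W1 @ x)         # ReLU hidden layer
        return W2 @ h                     # linear readout

    x = np.ones(4)
    print(forward(x))                     # identical output on every run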

And it’s completely unclear what those right conditions might be, or how those conditions could possibly give rise to conscious experience. Why aren’t humans philosophical zombies with no conscious experience, just reacting to input like machines? No-one has succeeded in getting past the conjecture stage in answering that question.


I am prepared to seriously discuss every one of my viewpoints; the initial comment was just the abstract. I am extremely confident in my world view about Deep Learning and cognitive ability, and the reason is that I generally try to avoid doing what you just did, that is, reading what other people think about this subject. I instead ground my views in real-world experiments and in information I have gathered and experienced first-hand, primarily an enormous amount of experimentation with Deep Learning models, both inference and training. I don't recite Andrej Karpathy or Ilya Sutskever; for the most part I don't even care about their opinions. I experiment with the models to such an extreme degree that I understand very well how they behave and what their limitations are. And I believe that if you are going to create a breakthrough, this is the only way to do it.

> an LLM or any other computational neural net is no different from a program like that

I don't think so. A program doesn't exhibit highly complex abstract thought in a very high-dimensional space.

> It’s executing deterministic instructions, like a machine, because it is a machine

It's true that LLMs are deterministic. But do you really think the magic behind the brain is just temperature and randomness? Do you really think non-deterministic behavior is the magic ingredient that makes up what we are referring to as consciousness? I could inject noise into an LLM at every parameter, dynamically during inference, and the output would come out just fine: LLMs are high-dimensional and can handle a little noise (see the sketch below). Would the model really be more conscious after that? There are experiments where people remove entire layers of an LLM and it still works fine; a little noise is even less harmful than that.
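
If you want to try it yourself, here is a rough sketch of that experiment, assuming the Hugging Face transformers GPT-2 checkpoint. The 1% noise scale is an illustrative choice, not a tuned value, and this version perturbs the weights once rather than re-sampling noise at every decoding step, but the robustness point is the same.

    # Rough sketch: perturb every parameter with small Gaussian noise,
    # then check that generation is still coherent.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * 0.01 * p.std())  # ~1% relative noise

    ids = tok("The brain is", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20)
    print(tok.decode(out[0]))  # typically still coherent despite the noise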

You see, when I argue, I'm not citing what some other person said; at most I'll cite other people's experiments and their results. When I contradict your arguments, I present you with a reality you can go and verify in the real world. You can verify for yourself that LLMs exhibit complex, high-dimensional thought. You can verify for yourself that if you inject noise dynamically during inference into every parameter, you still get coherent output from the LLM.

So, if you are willing to continue this discussion, I ask that you present as arguments some kind of "probing" of the real world and the real world's corresponding "reaction". That is what finding the truth means.

And lastly: I am presenting a theory. That means I believe my points form a foundation that makes my theory stronger than yours, and that I have better evidence backing it up. It doesn't mean I have proved what consciousness is. It primarily means I can make more accurate predictions with my theory in real-world scenarios involving artificial and biological neural networks, and my personal experience shows me that this is true.



