Hacker News | vackosar's comments

Egan is great. Another good book of his is Incandescence. The only problem is that some of his novels carry a kind of loneliness I can't quite describe.


I see only the problem of no ability to change the OKRs; otherwise the core idea of making a plan is right. Changing the goals should introduce an additional change process, because priorities change with new knowledge. I haven't seen that done well yet. Maybe having 6-week cycle OKRs [1] instead, as in Shape Up, could work better?

[1] https://basecamp.com/shapeup/2.2-chapter-08


I use mapy.cz often because of the UI and the better maps. Only the traffic estimates are probably not the best.


I read some criticism of General Relativity that I haven't verified, so take it with a grain of salt: the equations are so complex that only their simplest limits are studied, and so the theory is not fully verified.


https://vaclavkosar.com/ Mostly on machine learning research


The true problem is not technical, and it is right there in the article: non-academics "speak less and worry more". So their knowledge is likely scattered and not distilled. Building these devices is probably quite simple in the end; the story just isn't being told.


Correction! The cost is around $10M, not $10B.


Sounds interesting! Would you link to that or describe them here? Thanks!


A very simple one is "can you write a program that might never terminate?"

If a neural network does a fixed amount of computation, then it is never going to be able to do things that require a program that may not terminate.

There are numerous results in theoretical computer science that apply just as well to neural networks as to other algorithms, even though people seem to forget it.
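That contrast can be made concrete with a toy sketch (a hypothetical illustration, not from the thread): one function runs an open-ended search whose termination hinges on an unproven conjecture, while the other, like a feed-forward network, always does a fixed amount of work.

```python
def collatz_steps(n, limit=100_000):
    """Open-ended computation: the Collatz iteration is only conjectured
    to reach 1 for every n, so without the safety `limit` this loop is
    not known to terminate for all inputs."""
    steps = 0
    while n != 1:
        if steps >= limit:
            return None  # gave up; termination not reached within the limit
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def fixed_depth_net(x, weights):
    """Fixed computation: exactly len(weights) 'layers' of work regardless
    of the input, like one forward pass of a feed-forward network."""
    for w in weights:
        x = max(0.0, w * x)  # ReLU(w * x), one toy 'layer'
    return x
```

`collatz_steps(27)` happens to take 111 steps, but no general bound is known; `fixed_depth_net` always runs in time proportional to its depth.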

Another is "can an error discovered in late stage processing be fed back to an early stage and be repaired?" That's important if you are parsing a sentence like

   Squad helps dog bite victim.
Funnily enough, I saw Geoff Hinton give a talk in 2005, before he got super-famous. He was talking about the idea that led to deep networks, and he had a criticism of "blackboard" systems and other architectures that produce layered representations (say, the radar of an anti-aircraft system that starts with raw signals, turns those into a set of 'blips', coalesces the 'blips' into tracks, interprets the tracks as aircraft, etc.).

Hinton said that you should build the whole system in an integrated manner and train the whole thing end-to-end. I thought "what a neat idea" but also "there is no way this would work for the systems I'm building, because it doesn't have an answer for correcting itself."


I'm by no means an expert, but a lot of the choices machine learning algorithms make are more about training parallelization than anything else. In many ways it feels like something like a recurrent neural network, or some even weirder architecture, should be better for language, but in practice it's harder to train an architecture that demands each new output depend on the one before. Introducing dependencies on prior output typically kills parallelization. Obviously this is less of a problem for, say, a brain that has years of training time, but more of a problem if you want to train one up in much less time using compute that can't do sequential things very quickly.
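The dependency can be sketched in a few lines of plain Python (hypothetical toy functions, not any real architecture): the recurrent-style scan must walk through positions one at a time, while the independent map could compute every position at once on parallel hardware.

```python
import math

def sequential_scan(xs, w=0.5):
    """h[t] = tanh(w * h[t-1] + x[t]): each step needs the previous
    step's result, so the time steps cannot run in parallel."""
    h, out = 0.0, []
    for x in xs:
        h = math.tanh(w * h + x)
        out.append(h)
    return out

def independent_map(xs, w=0.5):
    """y[t] = tanh(w * x[t]): no cross-step dependency, so every
    position could be computed simultaneously."""
    return [math.tanh(w * x) for x in xs]
```

Both produce one output per input, but only the second is trivially parallelizable, which is roughly the trade transformer-style training exploits over recurrent training.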


You're assuming here that there are discrete stages that do different things. I think a better way to conceptualise these deep nets is that they're doing exactly what you want: each layer is "correcting" the mistakes of the previous layer.
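One toy way to picture that (a sketch under the assumption that layers add corrections to their input, as residual networks do, not anything from the comment): stacking layers then is literally stacking rounds of error correction.

```python
def residual_layer(x, correction):
    """Hypothetical 'layer' that refines its input by an additive
    correction, in the spirit of residual networks."""
    return x + correction

def forward(x, corrections):
    """Each successive layer nudges the running estimate further."""
    for c in corrections:
        x = residual_layer(x, c)
    return x
```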


Most "deep" networks are organized into layers, and information flows in a particular direction, although it doesn't have to be that way. Hinton wasn't saying we shouldn't have layers, but that we should train the layers together rather than as black boxes that work in isolation.

Also, when people talk about solving problems they talk about layers; layers play a big role in the conceptual models people have of how they do tasks, even if they don't really do them that way.

For instance, in that ambiguous sentence somebody might say it hinges on whether you think "bite" is a verb or a noun.

(Every concept in linguistics is suspect, if only because linguistics has proven to have little value for developing systems that understand language. For instance, I'd say a "word" doesn't exist, because there are subword objects that behave like a word, such as "non-", and phrases that behave like a word (e.g. "dog bite" fills the same slot as "bite").)

Another ambiguous example is this notorious picture

https://www.livescience.com/63645-optical-illusion-young-old...

which most people experience as "flipping" between two states. Since you only see one at a time, there is some kind of inhibition between the two states. Who knows how people really see things, but if I'm going to talk about features, I'm going to say that one part is the nose of one lady or the chin of the other.

Deep networks as we know them have nothing like that.


Yes, of course, thanks!


Correction!! The model cost around $10M, not $10B! Thanks for raising that. It was a mistake during copying from the second slide :(

