
None of this seems obvious just reading the original Attention is all you need paper. Is there a more in-depth explanation of how this adaptive weighting works?


The audience of this paper is other researchers who already know the concept of attention, which was already very well known in the field. Research papers never re-explain such things; readers are assumed to know them already or to read the cited sources, so the paper can focus on the actual research question. In this case, the research question was simply: can we get away with using only attention and not the LSTM anymore? Before that, everyone was using both together.

I think following the historical development can be helpful when learning this. E.g. in this case, learn the concept of attention, specifically cross attention, first. That is this paper: Bahdanau, Cho, Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate", 2014, https://arxiv.org/abs/1409.0473

That paper introduces it. But even that may be quite dense, and to really grasp it, it helps to reimplement those things.
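For example, here is a minimal NumPy sketch of Bahdanau-style additive cross attention (sizes and variable names are made up for illustration, and the parameters are random rather than learned):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    rng = np.random.default_rng(0)
    enc_states = rng.normal(size=(5, 8))  # h_1..h_5: encoder hidden states
    dec_state = rng.normal(size=(8,))     # s_t: current decoder state

    # Parameters of the alignment model (learned in practice, random here)
    W_a = rng.normal(size=(8, 8))
    U_a = rng.normal(size=(8, 8))
    v_a = rng.normal(size=(8,))

    # Score each encoder state against the decoder state, then take a
    # weighted average: the "context vector" the decoder attends to.
    scores = np.tanh(dec_state @ W_a + enc_states @ U_a) @ v_a
    alpha = softmax(scores)       # attention weights, sum to 1
    context = alpha @ enc_states  # convex combination of encoder states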

It's always dense, because those papers have space constraints imposed by the conferences, max 9 pages or so. For a more detailed overview, you can study the authors' code or other resources. There is a lot of material on these topics now, whole books, etc.


What books cover this topic exclusively? Thanks


This is frequently a topic here on HN. E.g.:

https://udlbook.github.io/udlbook/ (https://news.ycombinator.com/item?id=38424939)

https://fleuret.org/francois/lbdl.html (https://news.ycombinator.com/item?id=35767789)

https://www.fast.ai/ (https://news.ycombinator.com/item?id=24237207)

https://d2l.ai/ (https://news.ycombinator.com/item?id=38428225)

Some more:

https://news.ycombinator.com/item?id=35543774

There is a lot more. Just google for "deep learning", and you'll find a lot of content. And most of that will cover attention, as it is a really basic concept now.


Thanks for the UDL book (Understanding Deep Learning); that looks like a really great starting point.


To add to the excellent resources that have already been posted, Chapter 9 of Jurafsky and Martin's "Speech and Language Processing" has a nice overview of attention, and the next chapter talks specifically about the Transformer architecture: https://web.stanford.edu/~jurafsky/slp3/


I doubt any do.


It’s definitely not obvious, no matter how smart you are! The common metaphor is that it’s like a conversation.

Imagine you read one comment in some forum, posted in a long conversation thread. It wouldn’t be obvious what’s going on unless you read more of the thread, right?

A single paper is like a single comment, in a thread that goes on for years and years.

For example, why don’t papers explain what tokens/vectors/embedding layers are? Well, they did already, except that comment in the thread came in 2013 with the word2vec paper!
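If it helps: an embedding layer is conceptually just a learned lookup table. A toy sketch with a made-up vocabulary and sizes:

    import numpy as np

    vocab = {"the": 0, "cat": 1, "sat": 2}  # token -> id
    # One 4-dim vector per token id; learned during training in practice
    emb = np.random.default_rng(0).normal(size=(len(vocab), 4))

    ids = [vocab[t] for t in ["the", "cat", "sat"]]
    vectors = emb[ids]  # shape (3, 4): each token becomes a dense vector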

You might think: wth? To keep up with this, someone would have to spend a huge part of their time just reading papers. So yeah, that’s kind of what researchers do.

The alternative is to try to find where people have distilled down the important information or summarized it. That’s where books/blogs/youtube etc come in.


Is there a way of finding interesting "chains" of such papers, short of scanning the references / "cited by" page?

(For example, Google Scholar lists 98,797 citations for "Attention is all you need"!)


As a prerequisite to the attention paper? One to check out is:

A Survey on Contextual Embeddings https://arxiv.org/abs/2003.07278

Embeddings are sort of what all this stuff is built on, so it should help demystify the newer papers (it’s actually newer than the attention paper, but a better overview than starting with the older word2vec paper).

Then after the attention paper an important one is:

Language Models are Few-Shot Learners https://arxiv.org/abs/2005.14165

I’m intentionally trying to not give a big list because they’re so time-consuming. I’m sure you’ll quickly branch out based on your interests.


I found these notes very useful. They also contain a nice summary of how LLMs/transformers work. It doesn't help that people keep taking a concept that has been around for decades (kernel smoothing) and giving it a fancy new name (attention).

http://bactra.org/notebooks/nn-attention-and-transformers.ht...
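For what it's worth, the kernel smoothing connection is easy to see in code. A rough sketch (my own, not from the notes) of Nadaraya-Watson smoothing with a Gaussian kernel, on made-up data:

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    rng = np.random.default_rng(0)
    x_train = rng.normal(size=(20, 4))  # "keys"
    y_train = rng.normal(size=(20, 1))  # "values"
    x_query = rng.normal(size=(3, 4))   # "queries"

    # Each prediction is a weighted average of y_train, with weights
    # given by a Gaussian kernel on distance to the query point.
    dists = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    weights = softmax(-dists)   # shape (3, 20), rows sum to 1
    y_pred = weights @ y_train  # same form as softmax(Q @ K.T) @ V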


It's just as bad as "convolutional neural networks" instead of "images being scaled down".


“Convolution” is a pretty well established word for taking an operation and applying it sliding-window-style across a signal. Convnets are basically just a bunch of Hough transforms with learned convolution kernels.
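A minimal sketch of that sliding-window idea in 1-D (the kernel here is fixed; in a convnet it would be learned):

    import numpy as np

    signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    kernel = np.array([0.25, 0.5, 0.25])

    # Slide the kernel across the signal, dot product at each position
    out = np.array([signal[i:i + len(kernel)] @ kernel
                    for i in range(len(signal) - len(kernel) + 1)])
    # out == [2.0, 3.0, 4.0]; equivalent to
    # np.convolve(signal, kernel[::-1], mode="valid")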


I struggled to get an intuition for this, but on another HN thread earlier this year I saw a recommendation for Sebastian Raschka's series, starting with this video: https://www.youtube.com/watch?v=mDZil99CtSU (and maybe the next three or four). It was really helpful for getting a sense of the original 2014 concept of attention, which is easier to understand but less powerful (https://arxiv.org/abs/1409.0473), and then how it gets powerful with the more modern notion of attention. So if you have a reasonable intuition for "regular" ANNs, I think this is a great place to start.


Turns out Attention is all you need isn't all you need!

(I'm sorry)


softmax(Q @ K.T / sqrt(d_k)) gives you a probability matrix of shape [seq, seq], where each row sums to 1. Think of this like an adjacency matrix whose edge weights are flow probabilities. Hence: semantic routing of parts of X, reduced with V.

where

- Q = X @ W_Q [query]

- K = X @ W_K [key]

- V = X @ W_V [value]

- X [input]

hence

attn_head_i = softmax(Q @ K.T / sqrt(d_k)) @ V, where sqrt(d_k) is the normalizing term

Each head corresponds to a different concurrent routing system

The transformer just adds normalization and MLP feature-learning parts around that.
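A runnable NumPy sketch of one head, along those lines (sizes are illustrative, weights random rather than learned):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    rng = np.random.default_rng(0)
    seq, d_model, d_k = 4, 8, 8
    X = rng.normal(size=(seq, d_model))  # input: one row per token

    W_Q = rng.normal(size=(d_model, d_k))
    W_K = rng.normal(size=(d_model, d_k))
    W_V = rng.normal(size=(d_model, d_k))

    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    A = softmax(Q @ K.T / np.sqrt(d_k))  # [seq, seq] routing matrix, rows sum to 1
    head = A @ V  # each output row is a probability-weighted mix of V rows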



