Testing GPT4 128K context length performance (twitter.com/gregkamradt)
3 points by ren_engineer on Nov 9, 2023 | 1 comment


Summary is that GPT-4 struggles similarly to other large-context models: stuff in the middle gets lost, so it doesn't seem like OpenAI has any secret sauce to fix this problem at the moment.

other main points:

- Performance starts dropping significantly after 73K tokens

- Performance was worst at 7-50% of document depth (a rough sketch of this kind of depth test is below the list)

- Information at the beginning of the prompt was recalled regardless of context length
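For anyone wanting to poke at this themselves, here's a minimal sketch of a needle-in-a-haystack recall sweep. It assumes the openai>=1.0 Python client and an OPENAI_API_KEY in the environment; the model name, needle text, filler sentence, context size, and depths are illustrative placeholders, not the exact setup from the tweet.

```python
# Rough needle-in-a-haystack sketch: bury one fact at a chosen depth in a long
# filler document, then ask the model to retrieve it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEEDLE = "The best thing to do in San Francisco is to eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"
FILLER_SENTENCE = "The quick brown fox jumps over the lazy dog. "  # roughly 10 tokens

def build_haystack(total_sentences: int, depth_fraction: float) -> str:
    """Insert the needle at a given fractional depth of a long filler document."""
    insert_at = int(total_sentences * depth_fraction)
    sentences = [FILLER_SENTENCE] * total_sentences
    sentences.insert(insert_at, NEEDLE + " ")
    return "".join(sentences)

def recall_at(total_sentences: int, depth_fraction: float) -> str:
    """Ask the model the question against one haystack and return its answer."""
    haystack = build_haystack(total_sentences, depth_fraction)
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer only from the provided document."},
            {"role": "user", "content": haystack + "\n\n" + QUESTION},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Sweep needle depth at a fixed context size and check which answers
    # still mention Dolores Park.
    for depth in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
        answer = recall_at(total_sentences=8000, depth_fraction=depth)
        print(f"depth {depth:.0%}: {answer[:80]}")
```

Scoring in the original test was done by grading answers rather than eyeballing them, but the same depth/context-length sweep is the core of it.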



