Sequence Length
-
Context Window
The maximum number of tokens a language model can process in a single pass; it determines how much context the model can attend to. Typical values range from 512 to 128k tokens.
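A common consequence of a fixed context window is that longer inputs must be truncated before inference. A minimal sketch, assuming a simple left-truncation policy that keeps the most recent tokens (the function name and default limit are illustrative, not from any particular library):

```python
def truncate_to_context(token_ids, max_tokens=4096):
    """Keep only the most recent tokens that fit in the context window.

    `max_tokens` is a hypothetical limit; real values depend on the
    model, commonly between 512 and 128k tokens.
    """
    if len(token_ids) <= max_tokens:
        return token_ids
    # Drop the oldest tokens so the newest context is preserved.
    return token_ids[-max_tokens:]
```

Keeping the tail rather than the head is the usual choice for chat-style inputs, where the most recent tokens matter most; document-question tasks may instead keep the head or use a sliding window.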