For lexemes with large regexes (e.g., very large enums from a JSON schema) we could run a mock mask computation over the tokenizer to pre-compute a bunch of lexer states. Otherwise, there is a cost the first time we compute that mask: for 2k enum entries of ~50 bytes each it's about 4ms.
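A minimal sketch of the idea, assuming a lazily-built lexer whose DFA-like states are cached on first use (the names `LazyLexer`, `warm_up_lexer`, and `vocab` are illustrative, not the project's actual API): feeding every vocabulary token through the lexer once populates the state cache, so the first real mask request no longer pays the state-construction cost.

```python
# Hypothetical sketch only: warm the lazy lexer-state cache by running a
# "mock" mask computation over every token in the vocabulary up front.
from typing import Dict, FrozenSet, List, Tuple

State = FrozenSet[Tuple[int, int]]  # set of (literal index, position) pairs


class LazyLexer:
    """Toy lazy lexer: transitions are computed (and cached) on first use."""

    def __init__(self, literals: List[str]) -> None:
        # For illustration, model a large enum as a set of literal lexemes.
        self.literals = literals
        self._transitions: Dict[Tuple[State, int], State] = {}
        self.start: State = frozenset((i, 0) for i in range(len(literals)))

    def step(self, state: State, byte: int) -> State:
        key = (state, byte)
        cached = self._transitions.get(key)
        if cached is not None:
            return cached  # fast path once pre-computed
        nxt = frozenset(
            (i, pos + 1)
            for (i, pos) in state
            if pos < len(self.literals[i]) and self.literals[i][pos] == chr(byte)
        )
        self._transitions[key] = nxt  # first computation is the slow path
        return nxt

    def accepts_prefix(self, text: bytes) -> bool:
        state = self.start
        for b in text:
            state = self.step(state, b)
            if not state:
                return False
        return True


def warm_up_lexer(lexer: LazyLexer, vocab: List[bytes]) -> None:
    """Mock mask computation: push every token through the lexer once so the
    state cache is populated before the first real mask is requested."""
    for token in vocab:
        lexer.accepts_prefix(token)


if __name__ == "__main__":
    enum_values = [f"value_{i:04d}" for i in range(2000)]        # ~2k enum entries
    vocab = [b"val", b"value_", b"value_0123", b"xyz"]           # toy vocabulary
    lexer = LazyLexer(enum_values)
    warm_up_lexer(lexer, vocab)   # pay the state-construction cost up front
    assert lexer.accepts_prefix(b"value_0123")
```

The warm-up could be done at grammar-compile time or in a background step after loading the tokenizer, trading a one-time cost for predictable latency on the first mask.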