Hello,
Is it possible to interface tiktoken with Pelles C? I have searched but did not find anything.
tiktoken is written in Python.
If it is possible, how can it be done, please?
Thank you for your help.
The official Python docs cover embedding the interpreter in a C program (https://docs.python.org/3/extending/embedding.html).
The other way around, extending Python with C, is covered here (https://docs.python.org/3/extending/extending.html).
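If you go the embedding route, a minimal sketch could look like the following. It assumes a Python installation with tiktoken installed (pip install tiktoken), the Python headers on the include path, and the program linked against the Python library; the encoding name "cl100k_base" and the sample string are just examples.

/* Minimal sketch: embed CPython and call tiktoken from C. */
#include <Python.h>
#include <stdio.h>

int main(void)
{
    Py_Initialize();

    PyObject *mod = PyImport_ImportModule("tiktoken");
    if (mod == NULL) { PyErr_Print(); return 1; }

    /* enc = tiktoken.get_encoding("cl100k_base") */
    PyObject *enc = PyObject_CallMethod(mod, "get_encoding", "s", "cl100k_base");
    if (enc == NULL) { PyErr_Print(); return 1; }

    /* tokens = enc.encode("Hello, world!") -> list of ints */
    PyObject *tokens = PyObject_CallMethod(enc, "encode", "s", "Hello, world!");
    if (tokens == NULL) { PyErr_Print(); return 1; }

    Py_ssize_t n = PyList_Size(tokens);
    printf("%ld tokens:", (long)n);
    for (Py_ssize_t i = 0; i < n; i++)
        printf(" %ld", PyLong_AsLong(PyList_GetItem(tokens, i)));
    printf("\n");

    Py_DECREF(tokens);
    Py_DECREF(enc);
    Py_DECREF(mod);
    Py_Finalize();
    return 0;
}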
Hello,
Are you referring to this project?
https://github.com/openai/tiktoken
Yes. It computes the tokens.
For OpenAI, a token is roughly a group of three or four characters. The solution I have made is to divide the length of each word by three and add 1 if the word length is greater than three.
OpenAI tokenizer https://platform.openai.com/tokenizer
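As a sketch, that word-length heuristic could be written like this in C (the sample words are only illustrative; a real BPE tokenizer such as tiktoken will of course give different counts):

/* Rough token-count estimate described above: length/3 per word,
   plus 1 for words longer than three characters. A heuristic only. */
#include <stdio.h>
#include <string.h>

static size_t estimate_tokens(const char *word)
{
    size_t len = strlen(word);
    size_t n = len / 3;
    if (len > 3)
        n += 1;
    return n;
}

int main(void)
{
    const char *words[] = { "cat", "tokenizer", "approximately" };
    for (size_t i = 0; i < sizeof words / sizeof words[0]; i++)
        printf("%-14s -> ~%zu tokens\n", words[i], estimate_tokens(words[i]));
    return 0;
}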
I would love to convert this, just to stick it to Python.
I'm not a fan of it; however, given the use of vectors and a range of "modules", I would just grab the bindings and cave in.
A pure rewrite would utilize AVX/AVX2 or the SSE instructions (with CPU vendor detection, of course), so that alone is worth considering if you want a from-scratch implementation in C. Pelles C has vector support; you will find it in the project settings.
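For the feature-detection part, here is a rough sketch using the MSVC-style __cpuid/__cpuidex intrinsics, assuming the compiler provides them (a complete AVX check would also need to verify OS support via XGETBV):

/* Sketch: runtime CPU feature detection before choosing a SIMD code path. */
#include <stdio.h>
#include <intrin.h>

int main(void)
{
    int info[4];    /* EAX, EBX, ECX, EDX */

    __cpuid(info, 1);                  /* leaf 1: basic feature flags */
    int sse2 = (info[3] >> 26) & 1;    /* EDX bit 26 */
    int avx  = (info[2] >> 28) & 1;    /* ECX bit 28 (OS support not checked here) */

    __cpuidex(info, 7, 0);             /* leaf 7, subleaf 0: extended features */
    int avx2 = (info[1] >> 5) & 1;     /* EBX bit 5 */

    printf("SSE2=%d AVX=%d AVX2=%d\n", sse2, avx, avx2);
    return 0;
}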
Basically, you would need the OpenAI API documentation, and the rest is up to the C programming to align with it.
It looks like some work outside of the Python C bindings.
Quote from: HellOfMice on May 16, 2024, 12:48:26 PM
For OpenAI, a token is roughly a group of three or four characters. The solution I have made is to divide the length of each word by three and add 1 if the word length is greater than three.
OpenAI tokenizer https://platform.openai.com/tokenizer
One method to get better results (at least with the English language) is to look for both common prefixes and common suffixes and break the word there first. As I understand it, this creates more realistic token boundaries; a rough sketch of the idea follows below.
To that end, the following link might be useful for the most common of each:
https://www.scholastic.com/content/dam/teachers/lesson-plans/migrated-files-in-body/prefixes_suffixes.pdf
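Here is a rough sketch of that pre-splitting, with tiny illustrative affix lists (the full lists are in the PDF above):

/* Sketch of the prefix/suffix pre-splitting idea: strip a known common
   prefix and suffix from a word before estimating/encoding the middle. */
#include <stdio.h>
#include <string.h>

static const char *prefixes[] = { "un", "re", "pre", "dis" };
static const char *suffixes[] = { "ing", "ed", "tion", "ly" };

static void split_word(const char *word)
{
    size_t len = strlen(word);
    size_t p = 0, s = 0;

    for (size_t i = 0; i < sizeof prefixes / sizeof prefixes[0]; i++) {
        size_t n = strlen(prefixes[i]);
        if (len > n + 2 && strncmp(word, prefixes[i], n) == 0) { p = n; break; }
    }
    for (size_t i = 0; i < sizeof suffixes / sizeof suffixes[0]; i++) {
        size_t n = strlen(suffixes[i]);
        if (len - p > n + 2 && strcmp(word + len - n, suffixes[i]) == 0) { s = n; break; }
    }

    printf("%s -> [%.*s | %.*s | %s]\n",
           word, (int)p, word, (int)(len - p - s), word + p, word + len - s);
}

int main(void)
{
    split_word("unbreaking");   /* -> [un | break | ing]  */
    split_word("relocation");   /* -> [re | loca | tion]  */
    return 0;
}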
John Z
I have used the web interface and created a database with more than 174,000 English words and their tokens.
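For looking a word up in such a table, something like a sorted array plus bsearch would do; the entries below are purely illustrative, since the actual database format is not shown here:

/* Sketch: look up a word's token count in a sorted in-memory table. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry { const char *word; int tokens; };

static int cmp(const void *key, const void *elem)
{
    return strcmp((const char *)key, ((const struct entry *)elem)->word);
}

int main(void)
{
    /* Tiny illustrative sample, sorted by word; a real table would
       hold all 174,000 rows loaded from the database. */
    static const struct entry table[] = {
        { "cat", 1 }, { "tokenizer", 3 }, { "word", 1 },
    };
    const struct entry *e = bsearch("tokenizer", table,
                                    sizeof table / sizeof table[0],
                                    sizeof table[0], cmp);
    if (e != NULL)
        printf("%s -> %d tokens\n", e->word, e->tokens);
    return 0;
}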
Thank you, everybody, for your help.