Add o3-mini support #374

o3-mini support is missing.

Comments

Is it known which vocab file the tokenizer is going to use with this model?

Based on what I know, it is the same tokenizer as GPT-4o; they just added some extra stuff to it. Here is the test I ran:

```python
from openai import OpenAI

client = OpenAI()

def generate_response(prompt: str, model: str = "o3-mini") -> str:
    """Send a single-message chat completion and print its token usage."""
    print(f"Generating response for {model} model...")
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": prompt},
        ],
    )
    # print(response.choices[0].message.content)
    print("input tokens:", response.usage.prompt_tokens)
    print("reasoning tokens:", response.usage.completion_tokens_details.reasoning_tokens)
    # Visible output tokens = total completion tokens minus hidden reasoning tokens.
    print("output tokens:", response.usage.completion_tokens - response.usage.completion_tokens_details.reasoning_tokens)
    print("\n ---------------- \n")
    return response.choices[0].message.content

messages = [
    "Complete this sentence: The quick brown fox jumps",
    "kajsh ekr jn as kemnralsekjr la sekjrl skejlsmcelkamc .skenrs",
    "code, make sure you have installed the OpenAI Python",
]

# Compare prompt-token accounting for the same inputs across models.
for message in messages:
    print("testing for message:", message)
    generate_response(message)
    generate_response(message, model="o1")
    generate_response(message, model="o1-mini")
    generate_response(message, model="o1-preview")
    generate_response(message, model="gpt-4o")
```
Here is the output:

Notice that o3-mini and o1 have the same input token counts, o1-preview and o1-mini have the same, and then we have gpt-4o. All of them are offset from one another by a few tokens. I also did a similar test with a system prompt, and there the difference was about 9 tokens, but always predictable.
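
Those small, constant offsets are consistent with each model family wrapping messages in slightly different chat framing. Here is a minimal sketch of counting prompt tokens locally with tiktoken, following the pattern from the OpenAI cookbook; the per-message overhead of 3 tokens plus 3 reply-priming tokens is an assumption that matches gpt-4o, and the o-series models presumably differ by the few tokens observed above:

```python
import tiktoken

def num_tokens_from_messages(messages: list[dict[str, str]], model: str = "gpt-4o") -> int:
    """Estimate prompt tokens for a chat request (cookbook-style sketch)."""
    enc = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        num_tokens += 3  # framing tokens around each message (assumed, per the cookbook)
        for value in message.values():
            num_tokens += len(enc.encode(value))
    num_tokens += 3  # the reply is primed with assistant framing
    return num_tokens

print(num_tokens_from_messages(
    [{"role": "user", "content": "Complete this sentence: The quick brown fox jumps"}]
))
```
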
This is a pretty famous pip library used by tons of people. Why don't you just go through the code and explicitly define parameters for every OpenAI model, and the second there's news that OpenAI has released a new model, just find out the pricing and update your library? If you're really busy, maybe you could research on GPT how to advertise or promote your other interests along the way to make that possible, using your famous GitHub page.

But in response to the other guy: o3-mini and o1-mini don't have the same token counts, and it seems weird to have to sit here and do math when it's easy to implement in the library. They do have the same input and output pricing, though. I don't know all the parameters of this complex library, but it seems like there should be somewhere you could explicitly define that o3-mini costs $1.10 for input tokens and $4.40 for output tokens, make a dictionary based on that, and then design the code around that dictionary. Then whenever there's a new model, or they change the prices, you just update the price in the dictionary and the code uses that.
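
For what it's worth, here is a minimal sketch of the pricing-dictionary idea described above. Note that tiktoken itself only does tokenization, not pricing, so this would live in your own code; the prices are USD per 1M tokens, taken from the comment above, and are assumptions for illustration only:

```python
# Hypothetical price table, USD per 1M tokens; values from the comment above,
# not an authoritative source -- check the official pricing page.
PRICING_PER_1M_TOKENS = {
    "o3-mini": {"input": 1.10, "output": 4.40},
    "o1-mini": {"input": 1.10, "output": 4.40},  # same pricing per the comment
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from token counts and the price table."""
    prices = PRICING_PER_1M_TOKENS[model]
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1_000_000

# e.g. 10k prompt tokens + 2k completion tokens on o3-mini -> $0.0198
print(f"${estimate_cost_usd('o3-mini', 10_000, 2_000):.4f}")
```
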
Is that supposed to be in response to me? If so, I have no idea what this word salad is supposed to convey.

OK, I just took 5 minutes to do the math on the function, and it appears that when I give it the o1-mini model it outputs the same price as the o3-mini model. So whenever my code needs to calculate o3-mini tokens, I'm just going to have it use the o1-mini model parameter.

I'm sure you could glean the context if you read my other posts on the issue that I posted on.

This is resolved in tiktoken 0.9.
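
For anyone landing here later, a quick way to check, assuming tiktoken >= 0.9 is installed (`pip install -U tiktoken`):

```python
import tiktoken

# On older tiktoken versions this raised a KeyError for unrecognized model
# names; with 0.9 it should resolve o3-mini (assumption based on the fix above).
enc = tiktoken.encoding_for_model("o3-mini")
print(enc.name)
print(len(enc.encode("Complete this sentence: The quick brown fox jumps")))
```
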
Thank you for your help!

Adding the link to the mapping of model names to encoding files, just for clarity: https://github.com/openai/tiktoken/blob/e35ab0915e37b919946b70947f1d0854196cb72c/tiktoken/model.py#L8
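
For example, you can query that mapping directly. A sketch; the o200k_base result for o3-mini is my assumption, o200k_base being the encoding gpt-4o uses:

```python
import tiktoken

# encoding_name_for_model consults the MODEL_TO_ENCODING /
# MODEL_PREFIX_TO_ENCODING tables defined in tiktoken/model.py (linked above).
print(tiktoken.encoding_name_for_model("gpt-4o"))   # o200k_base
print(tiktoken.encoding_name_for_model("o3-mini"))  # expected: o200k_base (assumption)
```
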
Thanks, I'll check it out!