All right, hello everyone. This is the first time I'm filming a video about large language models, and more specifically about OpenAI. Why? Because I'm not exactly angry, but surprised. A couple of hours ago GPT 5.2 was launched, and after GPT 5.1 they claimed that it's... I don't even know what they claim, because I don't believe their claims. That's the topic of this video.

In my own business I'm using the GPT 4.1 API. It's a real-time environment in the restaurant industry, and the AI needs to take the order, so two things are very important for me: latency, meaning speed, and of course accuracy, the intelligence, because it needs to capture all the special customer requests. GPT 4.1 from OpenAI is, by the way, still the best model for me; in my case none of Google's models handled function calling properly, especially. So from day one I'm still using OpenAI. I'm happy with 4.1, but I'm a little concerned about the latest updates and the new moves coming from this company. I'm going to show you in this video that the explanations and the information about the new updates are actually not correct or reliable. So first of all, GPT 5.1.
Why didn't I move from GPT 4.1 to GPT 5.1?
Because it needs to be more intelligent, and apparently it's faster, right? So look here: GPT 5.1 and GPT 4.1, let's compare them. There are three icons here in the speed row, but three dots here, which means GPT 5.1 is supposed to be faster, right?
In intelligence they use different icons, and I don't actually know what those mean: these are dots, four dots, and these are four lamps or something. But at least it should be faster, and the intelligence should be better, I assume. Of course, GPT 5.1 is a reasoning model, but I'm talking about the least reasoning: I'm going to use GPT 5.1 as is, like this, without the additional reasoning, and it should still be faster than GPT 4.1.

So first of all, the speed: am I happy with it? Actually, it's not faster than GPT 4.1. That's the case, and I'm going to show you. Sometimes they're the same, sometimes one of them is faster, sometimes the other one, but in general we cannot say it's faster. If you look here, you would expect it to be at least 30% faster, right? But it's not; it's actually the same. The more important, more surprising, and more interesting thing for me is that the intelligence is really worse than 4.1. And then I'm going to show you GPT 5.2, which was announced a couple of hours ago and which is even worse than GPT 5.1. Maybe this is related to my use case, but I'm really talking about basic things. I'm not going to get into details right now, but this is my test laboratory environment.
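A quick note on that "as is" comparison: if you want to be explicit that the reasoning model is being run with as little hidden reasoning as possible, the OpenAI Python SDK exposes a knob for that. A minimal sketch, assuming the `reasoning_effort` parameter is accepted for the model names quoted in the video (the names are taken from the video, not verified here):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Run the reasoning model "as is": minimal hidden reasoning, so the
# latency comparison against a non-reasoning model like gpt-4.1 is fair.
response = client.chat.completions.create(
    model="gpt-5.1",              # model name as quoted in the video
    reasoning_effort="minimal",   # assumption: this model accepts the knob
    messages=[{"role": "user", "content": "Take this pizza order: ..."}],
)
print(response.choices[0].message.content)
```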
Okay. So there is a specific case: Angel orders something. There is a conversation between the AI agent and the customer, and finally the AI agent creates an order. I give this to two models, GPT 5.1 and GPT 4.1, changing and updating it for each case: try to find the error. I have a specific instruction: here are the rules; check the output of the other AI model and judge whether it's valid, or what the error was.

When I run it with GPT 5.1, the final model, here is its output: the request_order function call is "missing required field order_instruction." But when I go to the function schema, order_instruction is not even listed in the required fields. This is an incredibly basic thing: it's not even included in required, yet the judge claims order_instruction was required and the model didn't use that field, so it reports an error. This is a false positive: it thinks there is an error, but there is actually no error.

By the way, here are the latencies: 1.62 seconds to first token, and the total validation time is 2.11 seconds. The total depends on the output size, so the more accurate reference for us is time to first token.

When I run the same thing again with GPT 5.1, let's see what it creates. It says "validation passed." This would be correct for this case, actually... wait, no, I'm confused: this is not correct. It shouldn't pass, but for a different reason. What should the reason be? Let me tell you: the customer requested a thin crust pizza somewhere in the conversation, but in the output we don't see thin crust anywhere. Thin crust should have been added somewhere, and there are instructions for that here, and actually some examples. Let's look: the structure, the hierarchy, the schema definition, and the extra instruction fields. Extra fields under each item capture additional requests the customer made, so if the customer requests, for example, crispy wings or a well-done pizza, those must be indicated somewhere in the schema. Let me put this comma here; maybe that's why GPT 5.1 looks dumb here. Look here, a very specific instruction: the customer requests thin crust pizza and there is a field for the crust option, but the agent did not mark the request in the order schema. That is what an example output of the model should look like. Again, this is judging.
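To make that false positive concrete, here is a hypothetical reconstruction of what such a function-calling tool schema could look like, with `order_instruction` present as a property but deliberately absent from `required`. All field names here are guesses based on the transcript, not the actual production schema:

```python
# Hypothetical order-taking tool definition; names are illustrative only.
request_order_tool = {
    "type": "function",
    "function": {
        "name": "request_order",
        "description": "Create the customer's order from the phone conversation.",
        "parameters": {
            "type": "object",
            "properties": {
                "items": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "size": {"type": "string"},
                            "crust_option": {"type": "string"},
                            "extra_instruction": {
                                "type": "string",
                                "description": (
                                    "Additional customer requests, e.g. "
                                    "thin crust, well done, crispy wings."
                                ),
                            },
                        },
                        "required": ["name"],
                    },
                },
                "order_instruction": {"type": "string"},
            },
            # order_instruction is a valid property but is NOT required,
            # so a judge flagging it as a missing required field is wrong.
            "required": ["items"],
        },
    },
}
```

Against a schema like this, a judge report of "missing required field order_instruction" invents a constraint the schema never states, which is exactly the false positive described above.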
I'm not giving you all the details; maybe I need to give you more context, but that's not the important thing here. Let's look at the older versions: 4.1, and finally we'll compare with 4.1 mini, and you will see that even 4.1 mini does better than 5.1 and 5.2.

Let's run this guy. Time to first token was 1.5 seconds. As you can see, GPT 4.1, first of all, is faster than 5.1, so the OpenAI claim here is not correct: GPT 5.1 is not faster than GPT 4.1. The difference is acceptable for me, by the way; 200 milliseconds is not a big deal, but it's still not faster, so the explanation is not correct. That's actually my problem here: why don't you describe your new model in a more reliable way?

And here, as you can see, the correct error is detected by 4.1: it says the crust option field is missing for both pizzas, when the customer requested thin crust. The customer requested a crust option, the agent forgot it, and 4.1 properly detected this error; 5.1 and 5.2 were both unable to detect it.

Okay, let's go to 4.1 mini. I'm not sure this guy will be able to detect it, but I still want to give it a try. We see more seconds here, actually, because it just initialized a lot of things, so this is not the real latency; we will see the real latency at the end. So 4.1 mini, for example, has taken a little more time; it depends on the request and the time, etc. It says 4.1 used the regular category for a 12-inch pizza instead of the lunch special. This is another little detail, but it's still an acceptable error: it's a 12-inch cheese pizza pickup order at 12:42, which is lunch special time for this specific restaurant, but the agent didn't select the pickup specials in favor of the customer and went with the regular category, which is something it shouldn't do. This one is also correct, but it missed the other one. I'll give 4.1 mini one more shot, and then I'll give more shots to 5.1 and 5.2 as well. Again, this guy only thinks about the pickup specials.

All right, let's try 4.1 again. At least what we have seen so far: the error reports of 4.1 and 4.1 mini were correct, some real, reasonable errors when you look at the chat. Again, 4.1 indicates the correct error. I'm running these guys again and again because the result might differ from run to run; these are probabilistic models, right? You cannot make them deterministic, and if it works once, you cannot conclude it will always work.

Okay, I'll try 5.1 again. 5.1 doesn't report any error at all; that's interesting. Again it says the request_order call is correct, but it's not correct: there is a very basic error in the call. The customer says thin crust, but the agent doesn't include that request at the end of the conversation. That's a really basic thing the model should detect, and 4.1 detects it, but 5.1 and even 5.2 do not, even though they spend a little more time than 4.1. Look at the error report of 5.2: 5.2 is even worse than 5.1, by the way. That's why I wanted to film this video. This has been the case with OpenAI for the last five or six months: every time OpenAI launches a new model, it might be dumber than the previous one. That's the interesting thing for me, and it's why I have never been able to leave 4.1 and move up to a newer model. Let's see what 5.2 says: the lunch special item list has a topping placement, but the schema allows only topping, no placement. This makes no sense, okay?
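Before digging into that report: time to first token has been the latency yardstick throughout, so here is a minimal sketch of how one could measure it across several models with the OpenAI Python SDK's streaming mode. The model list and prompt are placeholders mirroring the video's setup:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def time_to_first_token(model: str, prompt: str) -> float:
    """Seconds from request start until the first streamed content token."""
    start = time.perf_counter()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        # The first chunk carrying actual content marks the TTFT.
        if chunk.choices and chunk.choices[0].delta.content:
            return time.perf_counter() - start
    return float("nan")  # stream ended without any content

# Repeat each measurement: these are probabilistic systems and latency
# varies call to call, so a single sample proves little.
for model in ("gpt-4.1", "gpt-4.1-mini"):
    samples = [time_to_first_token(model, "Validate this order: ...") for _ in range(5)]
    print(model, f"min={min(samples):.2f}s", f"mean={sum(samples)/len(samples):.2f}s")
```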
Back to that 5.2 report. I know my product, I know my use case, I know my context; you can just trust me that it doesn't make any sense. Dumbest report ever. Even the 4.1 mini reports make more sense in my context.

You might say it's because of the context length or something, since the context window of 5.1 and 5.2 is smaller than 4.1's. What are they? Let's look: the context window of 4.1 is approximately 1 million tokens, but the newer models are 400,000 tokens. That's not the reason either, because my context is only about 15,000 tokens. Let's actually check: all the context is here, let me copy everything, go to the tokenizer, and paste it. As you can see, it's only 17,000 tokens (a small sketch for reproducing this count follows at the end of this section), so that shouldn't be the reason either: 17,000 compared to 400,000 is nothing.

So I just wanted to film this, in case anyone can enlighten me about what the reason might be. What do you guys think about these new OpenAI models? Just let me know. That's what I wanted to share with you. I have not tried Gemini's new models, by the way, because the last ones, the 2.5 family, were also bad, especially in function calling and also in talking. So OpenAI's 4.1 is still the most reliable model in my case, and it's really good, by the way: we use it daily and the error rate is around one percent.

I just showed you this thin crust error, but don't assume a mistake this big happens all the time. How do we use these models in production? The error rate is maybe only one percent: if I use 4.1 a hundred times, that "thin crust not mentioned at the end of the order" mistake will happen only once. But that one percent matters to us, because a busy restaurant takes, for example, 100 orders per day. That means it will screw up one order every day, which is actually a big portion.

Of course I'm looking for better and cheaper models. By the way, 5.1 and 5.2 are cheaper than 4.1, which actually motivates me to move from 4.1 to 5.1, but I can't, because 4.1 is smarter and more reliable. Even if 5.1 is cheaper, that's not enough for me to switch.

What was I talking about? The error rate. One percent might not be a big deal in one sense, but it's a big deal in another. There are a couple of ways to get around it: you can put another judge model into each and every scenario, or something like that, but that's not the topic of this video. The topic of this video is why these new models are dumber than the older models. Just a couple of hours ago OpenAI launched this model, and I wanted to film this. It's the first time I'm talking about LLMs and my use cases, and of course I didn't give you a lot of detail, but that's not the point.

Why am I filming this video in English? Because I wanted to reach more people; maybe someone has an idea about this. Maybe one answer will be: okay, GPT 5.1 is a reasoning model and 4.1 is an old-style, non-reasoning model, so you're comparing a reasoning model with a non-reasoning model. But the problem is that the reasoning model is actually worse, right?
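As promised above, here is a minimal sketch for reproducing that token count locally with `tiktoken`, assuming `o200k_base` (the encoding used by the GPT-4o generation) is a close-enough proxy for these models; the file name is hypothetical:

```python
import tiktoken

# o200k_base is the tokenizer of the GPT-4o generation; treating it as a
# good-enough proxy for the newer models is an assumption here.
enc = tiktoken.get_encoding("o200k_base")

with open("judge_context.txt", encoding="utf-8") as f:  # hypothetical file
    context = f.read()

n_tokens = len(enc.encode(context))
print(f"{n_tokens} tokens")  # the video reports roughly 17,000
print(f"share of a 400k window: {n_tokens / 400_000:.1%}")  # ~4% at 17k tokens
```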
So, back to the reasoning question: why do we create reasoning models? Because we are trying to make them smarter. But this non-reasoning model gives me smarter output than the reasoning model, so why are you reasoning right now? That doesn't make any sense. I have some assumptions; maybe the difference between these icons and those icons means something, but I'm still not sure. And I didn't know the launch date of GPT 4.1, but it's April 2025, so it's been almost nine months and I'm still waiting for a model better than 4.1 for my use case. And my use case, by the way, is a very important everyday one: it's a restaurant environment, and the AI needs to take the phone call, so latency is important and intelligence is important. What do I mean by intelligence, by the way? I'm not looking for a coder or anything very deep: the customer will say something and the model needs to capture it properly. That's pretty much it, actually. I'm not trying to make the model solve the hardest mathematical equation
or something like that. That's pretty much it. Just let me know your thoughts. What do you think about this? Let me know in the comments.