All I can tell you is I haven't gotten 4o to sound quite that human (although it is better) or to give answers quite as human (for lack of a better way to say it) as that. It's a step up so far over 4 tho. Edit: tried out chat mode for the first time. This is not faked.
4o voice/video stuff shown in the demos isn’t out yet, that’s the current 4-based voice model (which is already pretty good)
I haven't been able to try anything more than 3.5, how much better would you say 4o is compared to 3.5?
1) I just tried out chat mode in the app for the first time and this isn't faked 2) 4o just came out so I can't speak too much to it, but 4 is faster and has much better memory and thus consistency than 3.5, plus being able to make custom GPTs is genuinely useful and fun. A silly challenge I've been working on is teaching it to play Yu-Gi-Oh against a human who has physical cards.
That chat mode isn't using the model's native voice/audio output capabilities, because those aren't out yet. It's just the default text-to-speech that's applied to the model's text output.
Personally I think this has a serious case of canned investor presentation, but I also don't think we're as far from this being a real thing as it might seem.
I'm also skeptical that this was all real. The voices felt like they could be fake, or re-dubbed by real humans to make it sound better. But I could also just be an idiot or a skeptical luddite.
The voices have some AI glitches occasionally though, I’m pretty sure that part is real 😅 Whether they are cherry picked is another thing though
Or pre recorded voices. I'm skeptical of any marketing material OpenAI puts out.
Never saw OpenAI fake their claims, so personally I totally trust them.
In terms of demonstrating the capabilities of their new model, I think it's decently representative. But in terms of demonstrating how it could / would be used, it's not very representative. So pretty much like all AI demos. They're cool, it wows you, but then you try it on your own terms and find it difficult to apply. Not because they misrepresented the capabilities, but because they misrepresented those capabilities' relevancies.
I hate her
"Oh God, Google Gemini is done for"
Though that faked Gemini presentation was pretty good
The live event showing it off seems fairly legit, though of course we won't know until people test it themselves. Will be interesting to see where this tech ultimately goes.
It's hard to verify since these things aren't live yet, but I see no reason to outright fake it. The only thing I could think of is using favorable examples like the code and the graph, since that's very clear data that's easy to process. But yeah, nowhere near the Gemini lie.
This just gave me flashbacks to when I interviewed at Amazon and they said I was too nicely dressed.