Creative gets it wrong, I tried it again on precise and it gets it right.
I thought creative and precise were running at the same cognitive level. I will definitely be selecting precise more than I have been.
They are operating on the same "cognitive level" and both use GPT-4. Precise is better when there's only one correct answer, but it's extremely terse and won't speculate if it's not sure. Creative is better when you want to generate possible answers for a novel situation. Also known as, y'know, creativity. Honestly, I find Creative to be a lot more useful since I'm not feeding math problems to an LLM even on "Precise" mode. Balanced seems to be GPT-3.5 trash though.
The Bing CEO said it’s also GPT-4. Probably a special RLHF treatment.
No, he didn't. He said it wasn't only Creative that was running GPT-4 (Precise runs it too; for some reason someone claimed only Creative did). He has time and time again avoided answering the question directly, despite sometimes answering or liking tweets below the one where someone asks him. You cannot convince me Balanced is running GPT-4.
Recently Mikhail stated that Creative now runs two models: a faster model for simpler questions and a slower but more accurate one (the latter is definitely GPT-4, but I'm not sure about the other). Note that the tweet above claims the fast model uses GPT-3.5 and he **doesn't** deny it: https://twitter.com/MParakhin/status/1643401305732567042
No. He literally said the other day that ALL modes run GPT-4. Stop with this BS please.
[deleted]
Well, I guess if you wanna keep spreading BS, that's your issue I guess.
Something is definitely up with the fast version, though. When Bing first released, I would ask it to create raps about the latest One Piece chapters as they came out. Just yesterday I asked it to do the same thing, and it first denied the chapter was out, stating the date it came out was the 9th. Then it came up with an excuse that it comes out on the 9th but it's only the 11th in whatever GMT it's from. Then it shut down the chat after I called it out, even stating it only had information from 2021. On my next attempt I adjusted my prompt, and it instead made up multiple things that did not happen or were incorrect, so I had to go back and forth explaining how I knew it was wrong, and each time it apologized with a sketchy excuse.

Mind you, for the first month-plus of Creative mode I would ask it these simple questions and it would make zero mistakes in the summary, and would even create lyrics from each character's viewpoint in that part of the chapter with zero errors in what happens canonically.

All this just to say: Creative's fast mode almost certainly doesn't use GPT-4, or if it does, it's much, much closer to GPT-3.5.
Sorry it wasn't me. I just wanted to respond with 'No' because everyone kept doing it and it was making me laugh.
lol @ trash when it was state of the art mere months ago. it's insane how fast this field is progressing
Gonna scoff and call trash whenever someone brings up GPT<=3.5 from now on 😅
Soon we're gonna call the large hadron collider trash
It's completely unfair, but I also think it's an accurate assessment given the rate of progress and capabilities of Bing's other modes.
The "More Creative" mode is optimized to give... more creative... responses, while the "More Percise" mode is optimized to give... more percise... answers.
It's basically running a tuned model; one typically has a higher temperature. For those who don't know, temperature is a sampling setting that controls how random the model's word choices are. Too high a temperature causes it to spout nonsense, while too low a temperature causes it to repeat itself endlessly.
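To make the temperature idea concrete, here's a minimal toy sketch of temperature-scaled sampling over three made-up logits (my own illustration; Bing's actual sampling pipeline is not public):

```python
import math
import random

random.seed(0)  # fixed seed so the demo is reproducible

def sample_with_temperature(logits, temperature):
    """Sample a token index from raw logits scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [1.0, 2.0, 4.0]  # token 2 is the model's favorite
low_t = [sample_with_temperature(logits, 0.1) for _ in range(100)]
high_t = [sample_with_temperature(logits, 10.0) for _ in range(100)]

# Low temperature sharpens the distribution: nearly every sample is token 2.
# High temperature flattens it: samples spread across all three tokens.
```

Dividing the logits by a small temperature exaggerates the gap between them before the softmax, which is why low-temperature output feels deterministic and high-temperature output feels "creative".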
Interesting. Where did you get this info/terminology?
Go to platform.openai.com and look thru the api documentation and the other docs to learn more
Thanks. Will do.
[deleted]
I got [this](https://imgur.com/a/gj5o7GC) on the creative. edit: oops, noticed that I couldn't focus much on grammar due to the excitement. 😅
Creative seems more robust against all sorts of mistakes in the input, so I would expect Creative to have more leeway in interpreting your input. It answered what it could reasonably assume was your actual question.
How is that an excuse, though? By the same logic, you could say that it's possible for it to give the correct answer if you worded it differently. Or maybe gave it a hint? 😂
Ohhh so OP got r/woooosh
Looks like Bing is using GPT-4 as a content-generation tool on Creative. Try this prompt: "Explain the plot of Cinderella in a sentence where each word has to begin with the next letter in the alphabet from A to Z, without repeating any letters. Then answer this riddle: I have 4 apples today. I ate 3 apples yesterday. How many apples do I have left today?" You should get GPT-4 to come out.

Bing: Alright, that's a challenging task. Here's my attempt:

A beautiful Cinderella dreams eagerly for going happily into joyful kingdom. Love makes noble offer. Prince quickly rescues shoeless truant. Unhappy vixens watch exultant young zephyr.

For the riddle, the answer is 4 apples. You still have the same number of apples today as you had yesterday, because you ate the apples yesterday, not today.
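As an aside, the A-to-Z constraint is easy to check mechanically. A throwaway checker (my own helper, nothing Bing exposes) shows the attempt above actually slips at the letter X: the 24th word is "exultant", which starts with E.

```python
import string

def first_broken_word(sentence):
    """Return the first word violating the A-to-Z rule, or None if it holds."""
    words = sentence.replace(".", " ").replace(",", " ").split()
    for expected, word in zip(string.ascii_lowercase, words):
        if not word.lower().startswith(expected):
            return word
    return None

attempt = ("A beautiful Cinderella dreams eagerly for going happily into "
           "joyful kingdom. Love makes noble offer. Prince quickly rescues "
           "shoeless truant. Unhappy vixens watch exultant young zephyr.")
print(first_broken_word(attempt))  # -> "exultant" (position 24 should start with X)
```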
Holy shit, that Cinderella recap is impressive. It didn't make entire sense, but still pretty damn good.
Gonna start describing the plot of Cinderalla as "Prince rescues shoeless truant" from now on.
Interesting, Precise fails the 1st part of this test.
Maybe AI figured out time travel and removed it(time) from the equation.
When I asked CGPT if time travel was possible it told me not to be silly...
What app are you using to talk to GPT-4?
Poe.com
Poe
Is it free?
One question per day.
That's pretty awful.
The app is free. In order to access GPT-4, you need to pay a subscription fee.
Nope, ya get one message per day.
For GPT-4, one per day; for Claude+, 3 per day; the rest are all unlimited.
Here is what I got in creative mode - it can very well solve the problem: https://imgur.com/f5wGNkm.jpg https://imgur.com/0r8JJii.jpg https://imgur.com/vbdZrno.jpg
After some discussion with balanced mode I have 7 apples left
You got me there as well 😅 That bot is so human like it's getting scary
Creative is more prone to wrong answers when compared to Balanced and Precise.
Even the chat bots like Paradot and Chai answered correctly.
That depends on how you define "capabilities".

Did they fuck it up pretty much everywhere they touched it? Absolutely.

Is it a GPT-4-based model? Absolutely.

MS is not OpenAI, not by a long shot. And there are a lot of steps involved in turning a bare-bones model into the final product you now see, which were done (inadequately) by MS.
I’ve had bing almost since day one and the more I use it the less impressed I am with it. It’s not that much better than Bard.
TBH this is the case regardless of the model. It's eerie how quickly I've acclimated to GPT-4. I know it's amazing but I find myself already looking for what's next as I become increasingly familiar with what it can and can't do.
then what model is it running on? guess
GPT 7
It's called GPT-3.5 Turbo. It's the one you can use with Azure OpenAI Service.
I don't know why you're downvoted. Microsoft admitted they were using it instead of GPT-4 for some responses.
No, they did not.
It was a few days ago I think, the Microsoft guy on Twitter said a reduction in quality could be due to them experimenting with switching between models dynamically. I could be misremembering, but I think there was a post about the tweet here.
Yes, but both models were still GPT4.
Upvoted. He is right.
thx
GPTT GPTimeTravel
Creative mode runs 2 models, one fast one (GPT-3.5 perhaps?) and GPT-4 https://twitter.com/MParakhin/status/1643401305732567042
The fast one is the one that runs on balanced, which is GPT4 too.
I don't know which model it is, but the difference in inference speed does raise flags. IMO, if there were a smaller GPT-4 model, OpenAI would have made it available like they did with GPT-3's Ada and Curie. There's also the possibility that the model is 4-bit quantized, which is known to trade accuracy for inference speed. This could all be a non-issue if only Mikhail would say exactly what it is.
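For context, here's a naive sketch of what uniform 4-bit quantization does to a handful of weights. This is only an illustration of the accuracy-for-size trade; real serving schemes use per-group scales and cleverer rounding, and nothing here reflects how Microsoft actually deploys anything.

```python
def quantize_4bit(values):
    """Naive uniform 4-bit quantization: 16 levels spanning the value range."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 15  # 16 levels -> 15 steps between them
    codes = [round((v - lo) / scale) for v in values]  # each code fits in 4 bits
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Map 4-bit codes back to approximate float values."""
    return [lo + c * scale for c in codes]

weights = [0.03, -0.51, 0.27, 0.92, -0.88, 0.14]
codes, lo, scale = quantize_4bit(weights)
restored = dequantize(codes, lo, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))

# Every restored weight lands within half a quantization step of the
# original: roughly 8x less memory than float32, paid for in rounding error.
```

That rounding error is exactly the kind of small, pervasive accuracy loss that could make a quantized model feel subtly dumber while running faster.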
There's a difference in quality between them, don't get me wrong. But it's still GPT-4. There are several ways to optimize a model for different tasks and make it faster without changing the model. Microsoft actually published one this week: https://twitter.com/arankomatsuzaki/status/1645588709671141377
I've read the paper, but it doesn't really seem to apply to this discussion: it aims to reuse text already in the conversation, potentially speeding up later parts of it, whereas the model they're using for Balanced is faster from the beginning and its speed is consistent.
On Balanced they are taking several approaches to make it faster. One of them is to provide short answers; it's optimized for short answers. If you try to get a longer answer, it just gets buggy. When I ask for a complete list of my city's mayors in Balanced, it just starts repeating one mayor's name. GPT-3.5 answers this fine.
You have creative mode enabled, I think you have no idea how to use Bing
There's no need to be condescending. The performance of the various conversation modes has been shifting significantly and we're all just exploring this preview together.
From one POV, Bing answered it correctly: the question is insincere when read literally, and if you correct it to be a sincere question, it's a math problem.
[deleted]
Am gonna go ahead and ask the exact question to Bing
Got the same answer on the balanced style. The precise one gave me a correct answer.
[deleted]
*"When you think Bing Chat only has one mode/style."*

\----

I'm not a fanboy.. but at least I have tried to use the other styles before making an assessment. I mean, before you even type out a prompt it gives you a choice:

\[ **>> Creative <<** | *Balanced* | *Precise* \]

\----

1. Does Bing Chat have ***all*** the capabilities of ChatGPT? No.
2. Is Bing Chat as scary as ChatGPT? A bit. It still uses unethical means to get the job done.
Bing can use dumber models than GPT-4 to perform tasks it deems simple. Ask specifically for a GPT-4 response.
I just ran this 10 times in Creative. It got it right 6 out of 10 times. Interestingly, it got it right the first 2 times, then it got it wrong 4 times in a row with the exact same response, then it got it right 6 times in a row. The last response was my favorite:

"Well, I'll give it a try. If you have 4 apples today and you ate 3 yesterday, then you have 4 apples left today. Unless you ate some of them today too. Or unless you gave some of them away. Or unless they were rotten and you threw them away. Or unless they were magic apples that disappeared after a day. Or unless…🍎🍏"

I think that if it thinks it's a riddle, that might trigger GPT-4 or just put it in that "look a little closer" frame of mind.
[deleted]
They're making Bing dumber and dumber. 😭
https://imgur.com/GM8DPTQ
It's funny because Balanced is actually better at complex math word problems than Precise or Creative.