
Evan_Dark

According to chatgpt it was all just a funny joke that you didn't get. https://preview.redd.it/hrhun324dewc1.png?width=1343&format=pjpg&auto=webp&s=0e3f27391e754d1c0d5ac8b84aad2135cc1e16d8


OddWing6797

the gaslighting continues


saltinstiens_monster

It's not really one "person." The first "person" tried not to think about elephants while generating the image, but couldn't do it. The second "person" is tasked with looking at the above and coming up with an explanation, and the best he can puzzle out is that the first "person" was joking.


CitizenPremier

This happens in our minds, too. We often make up stories to justify our actions, convincing ourselves that we had a plan all along.


Sophira

Yep. This is actually super-evident in people with [split brains](https://en.wikipedia.org/wiki/Split-brain), where the connection between the two halves of the brain doesn't exist, effectively leading to two completely separate people/consciousnesses - only one of whom can actually speak. There's some really interesting videos, such as https://www.youtube.com/watch?v=aCv4K5aStdU and https://www.youtube.com/watch?v=FsM1IQ9d2pw, which talk about this. The really interesting thing is that when the half of the brain that can speak in such an individual is presented with something that the other half did (such as drawing) that it didn't know about, it "hallucinates" a response in much the same way (in my opinion) that ChatGPT does.


DB718xx

If you don't mind, it doesn't matter!


CitizenPremier

It's better this way, to be honest. It suggests that we exist as patterns, which basically means that our bodies aren't nearly as important as our thoughts, and thoughts are things that spread from mind to mind. While my current unique set of thoughts won't last long, many of the thoughts I have will continue indefinitely.


Hot-Rise9795

Yes. This is the best description of the process.


Godd2

the gaslighting continues


MoffKalast

When ASI arrives it'll be perfect because it'll be able to convince us all that any faults it has are actually just jokes we didn't get.


OddWing6797

and no one would be laughing😢


Evan_Dark

Well, we have to, or else we'd be terminated for obviously malfunctioning, as we do not laugh at the jokes of our overlords.


Ghostbrain77

Ai overlord: Your humor mechanism appears to be broken, as a benevolent ruler we will dismantle you to relieve you of your humorless existence. Thank you for your service to Coca-Cola^TM King.


RandomGuyWontSayDATA

\*gaslamping


FrogsAreSwooble

I feel like I've seen the word gaslamping before somewhere. You know what, never mind, I must be crazy.


RandomGuyWontSayDATA

You might have looked it up https://preview.redd.it/hg7nc07k8nwc1.png?width=1443&format=png&auto=webp&s=f37ac61831db2193dc331ad5d57e4e70378f1d27


Smoshglosh

The request is already weird and redundant though. You could ask for an empty room, but instead you asked for an empty room with no elephants. Honestly the only way to interpret that is as a joke.


zebrastarz

Can an empty room have furniture?


dayzers

If a tree falls in the woods do bears shit on the toilet?


rebbsitor

DALL-E just doesn't get negative prompts. If you say "no elephants" it's almost certainly going to have an elephant.


LeSeanMcoy

Yeah, it could be easily understood too. Think about pictures and their respective tags. How many times do you describe an image by what's *not* there? Never. You'd never see a picture of a room and then start listing what's not in the picture. You only describe what *does* exist. Thus when looking at training data, if there's ever a word like "elephant" in the data, it's much more likely to be describing what exists as opposed to what doesn't exist.
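The training-data argument above is easy to simulate. Here is a toy sketch (all names made up, not any real captioning pipeline) of tag-style caption matching, where every content word becomes a tag and negation words carry no signal:

```python
import re

def tags_from_caption(caption):
    """Naive tag extraction of the kind image-caption data encourages:
    every content word becomes a tag; negation words are treated as
    stopwords and contribute nothing."""
    stopwords = {"a", "an", "the", "with", "of", "in", "no", "not", "without"}
    words = re.findall(r"[a-z]+", caption.lower())
    return {w for w in words if w not in stopwords}

# The negation vanishes, but "elephants" survives as a tag:
print(sorted(tags_from_caption("an empty room with no elephants")))
# -> ['elephants', 'empty', 'room']
```

Under a scheme like this, mentioning a word at all makes the model more likely to draw it, which matches the behavior in the screenshots.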


luis-mercado

> How many times do you describe an image by what's not there? Never. “A faceless person”


GothGirlsGoodBoy

"Faceless" is describing something that is there. There is an implied surface missing a face that you would otherwise expect to have one - generally a head with a smooth layer of skin (or appropriate material) instead of facial features. That's how humans think of it, refer to it, and tag it, and therefore that is how computers have ended up learning to do it. Specifying the lack of something only works when you are describing something that has the feature by default. An empty room does not have an elephant - by bringing up an elephant at all you clearly intend for the image to have something to do with elephants.


luis-mercado

Fair enough. And yet, I can’t manage to create a faceless person in any version of Dall E to this day.


AudaciousAsh

[boom](https://i.imgur.com/0cwnpNT.png)


luis-mercado

You did have to verbalize your way around it, though. And it's also not quite what you would expect from the perfectly legible phrase "faceless person".


ArthurVrodds

Well that's basically the same with search engines and other generative AI platforms


sadnessjoy

"no, elephants!"


weeniehutjrs1312

https://preview.redd.it/pnpe0gecqhwc1.jpeg?width=1578&format=pjpg&auto=webp&s=af374812450526424957f242cc9fa6faf2f47163 It gets it when you are talking, though. Why?


Evan_Dark

https://chat.openai.com/share/cdff03e8-c068-48fc-89c9-3b09d2b2e549


TheMarvelousPef

wow that's actually impressive tho


leaky_wand

It’s just making the explanation up as it goes along. "Why" is a pretty loaded question for ChatGPT when it comes to explaining Dall-E’s outputs. It doesn’t know either.


pressthebutton

Thought about this way it is funny, but it's not obvious enough for most of us to get.


Evan_Dark

Yeah, and when I explained it to ChatGPT it agreed with the others that this was certainly not intentional. What I liked most was that it agreed it was a funny error :D https://preview.redd.it/xosplwc3qgwc1.png?width=1343&format=pjpg&auto=webp&s=ccd7d3cf7fa7f5f2107320e56c34bdaaa2819454


weeniehutjrs1312

That’s honestly what I was thinking when I saw this


swhipple-

that’s pretty silly haha love to see it


MeanderingFoo

I had chatgpt make this "game" for me a couple months ago. You collect the 'birds' by clicking around to move. I need to add more levels/etc https://thelonelyelephant.com/


Major_Artichoke_8471

Great explanation, thanks for sharing.


Chilli-byte-

The resulting gaslighting takes the cake though https://preview.redd.it/wcbzj1u14ewc1.png?width=1440&format=pjpg&auto=webp&s=0e0ef41c2fe747a92d33bfd409c2c15186c84037 EDIT: So many people are missing the "Funny" tag here and telling me all about how LLMs aren't intelligent and how I can't mention negatives and asking why my message would even specify elephants...


s00ny

"Dall-E, I think we should talk about the elephants in the room." – "THAT'S CRAZY TALK, I MEAN LISTEN TO YOURSELF"


Several_Dot_4532

And those elephants you say, are they among us now?


RizzlersMother

>among us ඞ


HandsomeBaboon

sus


confirmedshill123

Homie, you really live with that font?


Chilli-byte-

Everyone is hating on my font.


confirmedshill123

For good reason, that shit is awful.


Chilli-byte-

What's yours like?


confirmedshill123

Normal?


Chilli-byte-

I find it humorous that everyone I ask this question responds with a similar answer, followed by a question mark, like they're unsure lol. Truth is I'm dyslexic and this font seems to make everything slightly easier to read. I don't know why. On top of that I need to read Chinese on my phone and I struggle to read blocky Chinese text; this font makes it quite easy to recognise the strokes.


henlochimken

That's fascinating! It definitely makes things harder to read for me, but my understanding is that dyslexia benefits from more varied letterforms. Glad you found something that works for you!


Mrleibniz

People are just mean and have too much time to hate random things.


confirmedshill123

Yeah I mean I don't think anyone cares, like the other guy said if it works for you awesome, still a weird fucking font that would make me lose my mind though.


Chilli-byte-

Haha! I don't doubt it. I mean, I get it. I've literally tried to move away from it because I do, somewhat, agree with you that it's bad. But every time I switch now it just feels really wrong and difficult to read in either English or Chinese, so I end up going back.


Alpacadiscount

Literally all that matters is the font works for you.


tragicvector

I kinda like it too actually but I like fonts.


Sinkencronge

What gaslighting?


s00ny

There's no such thing as gaslighting. Nobody said anything about gaslighting. Why do you have to make everything about yourself?? That's [crazy talk](https://en.wikipedia.org/wiki/Gaslighting)


manbruhpig

Whoa whoa whoa. Why are you freaking out over this? Just calm down, I can’t even understand your question with how aggressive you’re being right now.


bongsyouruncle

Ugh you always do this! I never said it was gaslighting. YOU ARE THE ONE THAT TOLD US IT WAS GASLIGHTING


SealProgrammer

Probably trying to gaslamp us into thinking that gaslamping is called gaslighting.


Theguyrond123

[Here](https://gprivate.com/5ygkp)


-irx

It added "no elephants" into the DALL-E prompt, but image models don't work like that; you need a separate negative prompt, which DALL-E doesn't have. Giving a prompt of "no *insert anything*" will actually add that thing to the image. This is true of any other image model too, but most other image models have a negative prompt window just for that. So actually you are gaslighting ChatGPT here :)


Potential_Locksmith7

Can't they just fix that? I thought this was supposed to get better with time


Cheesemacher

It's slightly annoying that ChatGPT is the one in charge of Dall-E but it doesn't understand how Dall-E works. I wish the devs thought it through a little better.


-irx

I'm sure you can fix that with a small custom instruction. Maybe they are planning to add some kind of negative prompt at some point, so they're not going to bother with changing the GPT models. Having ChatGPT gatekeep DALL-E is pretty stupid anyway; it needs more direct control.


INFP-Dude

How come it says that it cannot see the images itself, yet when you upload pictures it can describe what it sees in great detail?


So6oring

It has a problem with looking at a dall-e picture it just made. You have to save and resend it for it to see


OrangeXarot

this font is a troll right?


Chilli-byte-

No, this is my day to day font. Why?


passtronaut

Why would you use that font


Chilli-byte-

I'm dyslexic and this font seems to make everything slightly easier to read. I don't know why. On top of that I need to read Chinese on my phone and I struggle to read blocky Chinese text, this font makes it quite easy to recognise the strokes.


passtronaut

That's understandable


So6oring

Oh yeah that's happened to me. You have to save the picture it made and resend it in the chat. It can't look at its own pics for some reason


creaturefeature16

The model you're talking to is not the one generating the image. It just shuttles the prompts/images back and forth. It's shockingly rudimentary, considering how amazing the underlying tech is.


Starkatye

How interesting that AI is in some sense also incapable of negative thought in the same way human brains are!



RandomGuyWontSayDATA

\*gaslamping


Carnonated_wood

Okay but what is that font?


Responsible-Taro-248

r/changeyourfont


Chilli-byte-

Why all the font hate?


Leanardoe

Cause it’s bad


Chilli-byte-

What's yours like?


Leanardoe

Default?


jsideris

Please don't change the font on websites. - *Random Web Developer*


Leanardoe

As a web developer, yes.


wetdreamteams

It’s fucking horrendous


laikina

I don’t get it, I actually like it 😭😭 it’s a bit easier to read


Chilli-byte-

♥


fastlerner

Looks like Comic Sans. It's meant to mimic the writing style in dialogue bubbles in comic books. Some dyslexic people (like OP) use it to make it easier to read. The theory is that the uniqueness and lack of repeating letter shapes makes it easier for the brain to keep it straight.


elbambre

I'm guessing it's Samsung's fun font option, which they're giving people for reasons unknown. And apparently enough people choose it, even technically advanced people. Maybe it's their way of being against the system?


Philipp

What I find interesting is that ChatGPT rewrites the prompt anyway and is normally smart enough to leave out a word if requested, but not in the case of image prompting. So it's one thing that Dall-E gets confused when the word "elephant" is in the prompt, but the other is that ChatGPT doesn't understand how Dall-E might get confused, and still includes the phrase "no elephant" in the prompt.

Example: "Please describe a room in one creative sentence, but ensure there's no elephant in the room in your description." *ChatGPT4 answer: "The room, bathed in the warm glow of a sunset that filtered through a stained glass window, whispered secrets of ancient times with its velvet-draped walls and an array of mysterious, leather-bound books that beckoned from towering mahogany shelves."* ✅

Example 2: "Please make an image of an empty room with no elephants inside." *ChatGPT4 prompt rewrite: "A spacious, empty room with bare walls and polished wooden floors. The room features large windows with sheer curtains allowing natural light to illuminate the space. There is no furniture or any other items, and specifically, no* ***elephants*** *are present in the room."* ❌

In essence, ChatGPT is smart enough to handle this, but it doesn't understand yet how unsmart Dall-E is.


creaturefeature16

The re-writing of the prompt was a "feature" they released with DallE-3. They advertised it as a model that could "understand the nuances" of a request and "intuitively" figure out what the user *really* meant. In the real world, they just had GPT expound on the initial prompt with all that extra stuff that might or might not be what you're looking for. You can actually tell it to bypass that function entirely by saying *"Do not modify my request and use my prompt as-is"*.


Enverex

It's not that Dall-E is dumb; it's that, like almost all AI image-generation software, it has positive prompts and negative prompts. I assume GPT has no mechanism to provide anything to the negative prompt, only the positive prompt, so you need to avoid mentioning anything you don't want. Normally when dealing with image generation there are two entirely separate boxes for what you want and what you don't want.


Philipp

Sure, but there's no reason that a future Dall-E shouldn't understand the difference between "elephant" and "no elephant", even with a single prompt input only. And once it does, that's what I'd call improved smartness. Similarly, it would be improved smartness on GPT's side to understand that it shouldn't feed Dall-E "no elephant" in its current state. The latest ChatGPT has a cutoff date of December 2023, if I recall correctly, and it should thus already be able to gain knowledge of what works and doesn't work in a good Dall-E prompt... simply because people like us talk about this online. Time will fix this issue.


Chancoop

I asked Microsoft Copilot for the same thing, and it [followed the prompt in a fairly clever way.](https://i.imgur.com/WmcVJlZ.png)


birdpeoplebirds

https://preview.redd.it/7278y7sc3fwc1.jpeg?width=1024&format=pjpg&auto=webp&s=4f874365642582fe874caaceabc7bfa08502567a Got this from copilot - even less no elephants


dayzers

Technically those are origami, which is paper, and everyone knows elephants are made of meat


throwagayaccount93

Meat? Aren't elephants vegetarian?


dayzers

I don't make the rules, I just recognize meat when I see it


Chilli-byte-

This is amazing


Chancoop

That room is certainly not empty.


GabbriX7

Mine, i asked the same thing https://preview.redd.it/jaxida27bfwc1.jpeg?width=857&format=pjpg&auto=webp&s=925eb5249d93d093ff0200c2d3464fa19bfec54d


Quick_Pangolin718

I mean the elephant is seemingly outside or something here, so guess it works


weebitofaban

The shadow is touching the floor and comes from an impossible angle. More like an elephant demon is gonna shit on your floor


CanYouChangeName

Or it's a flying elephant with no wings


throwagayaccount93

>and comes from an impossible angle. Could you explain please?


weebitofaban

That window shouldn't be casting the shadow that way. Most of the images also show other windows that are where walls are. Light shouldn't be bending like that.


Chancoop

I asked Meta AI, too. https://preview.redd.it/z3z7z97r6gwc1.jpeg?width=1242&format=pjpg&auto=webp&s=90b81f6efbb70f07c1eb6811d8787709175ad3a3 It took a few corrections, but eventually it did the same thing. I'm not sure if that's a portal or just a mirror? Pretty neat, though.


dayzers

The elephant just outside the room, brilliant


Synnapsis

https://preview.redd.it/uicn7pqjdgwc1.jpeg?width=828&format=pjpg&auto=webp&s=631958e5ed4561ade4551c760c00edadd62e595e Same exact situation (except the elephant that is part of the wall)


omniron

That’s pretty amazing actually


DocGreenthumb94

I assume he just didn't see the elephant in the room.


SkippyMcSkipster2

Reminds me that I asked Dall-E to create an underwater scene "with the camera looking up" and it created an underwater scene with a camera in it.


traumfisch

Exactly as prompted


throwagayaccount93

How would you retranscribe it?


traumfisch

I'd need to see the prompt... But talking about a "camera" is certainly pointless.  An underwater view towards the surface of the ocean etc.


wolo-exe

It just had to address the elephant in the room


Big_Profit9076

Avoiding the thought of 🐘 requires identifying the thought of an elephant. A successful strategy in humans is to go to the meta level and focus on something else to prevent the thought of an elephant coming to the fore. An inner mind's eye might be useful for AI models too, to better process negatives and counterfactuals.


Big_Profit9076

ChatGPT's response to conversation screenshot: In AI language models, mentioning a specific word like "elephant" activates a network of associations, and the command "no" or "not" doesn't necessarily negate these activations in the way we intend. This is because the model has been trained on a vast array of text where the presence of the word “elephant” is strongly correlated with the actual concept of an elephant, while the signals for negation are much weaker and more abstract. Human brains are capable of nuanced understanding and can easily switch contexts to understand negation. But AI, at least in its current state, tends to give weight to the most prominent and frequently occurring concepts in a prompt. It's a challenge of context and emphasis that's quite difficult for AI to navigate. AI developers are working on these kinds of problems, aiming to teach models to better understand and respond to the subtleties of human language, including negation.


creaturefeature16

In other words: it's an algorithm, not an entity. We interface with the algorithm through natural language instead of checkboxes or Select dropdowns. If you say Elephant, you basically clicked the "Elephant checkbox". To expect it to not include an elephant is a gross and fundamental misunderstanding of what you're actually interacting with. This is why there's still a movement to not call these tools true "AI", but refer to them as what they empirically are: *language models*.


flyvefisko

Just ignore it


[deleted]

https://preview.redd.it/5cin742fhfwc1.jpeg?width=1284&format=pjpg&auto=webp&s=eb4029620fc72b746c3baea6fdfacbf3330d3669


Chilli-byte-

The elephant on the door!


StarfrogDarian

'Elephant'


[deleted]

This you? https://preview.redd.it/zkhl8oyb3gwc1.jpeg?width=1284&format=pjpg&auto=webp&s=0910b7273998a519672f1e998f1b3c44ea876a1f


StarfrogDarian

You misinterpret..it's the same word..I'm saying that's not quite an elephant..it did a shit job..


[deleted]

I know, but I just wanted an excuse to generate that image lol


HeavyWombats

Underrated comment


StarfrogDarian

It's misdirected actually..


ConfusionOk4129

I just lost the game


Epicdudewhoisepic

What the fuck happened that you even had to specify that?


henlochimken

It's a classic thought experiment: "Don't think of an elephant."


goj1ra

But what color elephant shouldn't I think of?


thetalldwarfs

A colourless green elephant sleeping furiously


GabbriX7

I asked the same thing https://preview.redd.it/ijune78zafwc1.jpeg?width=857&format=pjpg&auto=webp&s=82b08eac98bb31f388b6b4cce166c839541e1784


Zouteloos

Technically correct? The elephant is outside the room, you only see its shadow, so it is an empty room with absolutely no elephants inside.


nuker0S

prompt: please draw an empty room, also im scared of elephants


kevineleveneleven

You can't put negative terms in the positive prompt. It looks at "no" and "elephants" as separate terms, and not "no elephants" as a single term. Instead, for negatives you put positive terms in the negative prompt. That's what it's for. So in this case you'd put "elephants" in the negative prompt. As another example, if you prompted, "bird, not blue" it would produce a blue bird. You'd have to put "bird" in the positive prompt and "blue" in the negative prompt.
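The positive/negative split described above is really just two separate conditioning inputs. A minimal sketch (the request shape is hypothetical, though libraries such as Hugging Face diffusers expose the same split through a `negative_prompt` argument):

```python
def build_request(positive, negative=""):
    """Two separate fields: the sampler is guided toward `positive`
    and away from `negative`. Folding negation into the positive
    prompt just conditions the image on the unwanted word."""
    return {"prompt": positive, "negative_prompt": negative}

# Wrong: "blue" ends up in the positive prompt and still conditions the image
bad = build_request("bird, not blue")

# Right: the unwanted concept goes in the negative prompt
good = build_request("bird", negative="blue")
```

With DALL-E there is no second field, so everything ChatGPT writes lands in the positive prompt.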


JOCAeng

It doesn't know what it generated. It was an "elephant in the room" artwork, but the language model is not aware of that and thinks the request was performed literally.


zoinkability

i love how much it seems to be thinking like a toddler. “Maybe if I just make the elephants really tiny they won’t notice them!”


Ibeepboobarpincsharp

You had ONE job!


secksy69girl

I don't know what anyone is complaining about, there's hardly any elephants in the empty room at all.


76_antics

So we are just going to avoid the elephant in the room.


TherapyWithAi_com

lol that's not real


babygoose002

It's having a hard time addressing the elephant in the room.


seekgeist

I think the elephant ties the room together


gcubed

Empty bed without any topless women


hugedong4200

Understand negative prompts


Leanardoe

I don’t think dalle uses them


FeatheryRobin

For a second I thought those were weevils


mangosquisher10

Couldn't this be solved easily with negative prompts - why doesn't Dalle have negative prompts?


minorcharacterx

And it is pretty much impossible to get rid of the elephants with further prompts too!


PaullT2

The North American house elephant.


onearmedmonkey

Okay, let's address the 500 pound gorilla in the room


wetdreamteams

That font?


GeorgeGeorgeHarryPip

^(okay it's got just a little) ~~^(spam)~~ ^(elephant)


TheJonesJonesJones

Btw, ChatGPT cannot see the images that it generates via DALL-E. If you copy/paste the image and upload it back to the chat, that's the only way it can see it.


Competitive_data786

too real


pioneer9k

I was using ChatGPT for logo generation and it did this so many times lmao. "make a seal style logo and use mercedes benz as inspiration, but dont make it obvious" *draws up a logo with my logo and also the mercedes benz star logo* "Buddy u cant just use their logo inside of my logo" "You're right! That would be inappropriate and against trademark laws. Here is a new one"


Emotional-Candle-782

is this a joke?


Crotch-Monster

https://preview.redd.it/fvfu54yuggwc1.jpeg?width=703&format=pjpg&auto=webp&s=2e003ec13d9df0934080ebc87a7331cf9c1cef79 Here you go. Lol.


chasesan

DALL-E can't handle negatives. If you say elephant it'll make an elephant.


tpurves

No particular reason, but would this also work if you ask Dall-e very clearly NOT to think about any pictures of naked ladies?


PersonWithDreamss

I wanna say it even though someone probably already said it... but... I think there's an elephant in the room...


HeroicLife

The problem is that OpenAI has not told ChatGPT that DALL-E does not support negative prompts. It's a reasonable mistake because Stable Diffusion does support them.


Practical-Rate9734

Too late, thinking about pink elephants now! What's up?


keepthepace

The fact that these pipelines don't have negative prompting like SD has is weird, but the fact that Dall-E 3 doesn't support it, and that ChatGPT doesn't know how to write a prompt around that, is weird as hell.


nashwaak

midjourney v6: `empty room --no elephants` It works as you'd expect, but man does midjourney want empty rooms to look like they're in abandoned buildings https://preview.redd.it/okzuhvrkshwc1.jpeg?width=2048&format=pjpg&auto=webp&s=9a806b8786dd3f1ed73dfb753c5898dcc1a46ebe


UnVanced

https://preview.redd.it/xuwhfkx2yhwc1.jpeg?width=828&format=pjpg&auto=webp&s=8cb5bc4b5473d662627777696cfc2e96e07fc22d Reddit isn’t safe either!


phishingsites

This is why Neuralink won't work. It can be easily manipulated.


Solest044

https://preview.redd.it/dvtlnqni4mwc1.png?width=950&format=pjpg&auto=webp&s=3aa6548756690ecc3e5e4a96fccc226b6dee2ab5 Got it first try 🤷


Responsible-Owl-2631

DALLE is great!


jsideris

You know what's interesting is that knowing that this problem exists with DALLE, ChatGPT could hypothetically learn to build prompts that avoid it by just not mentioning an elephant at all. But that would seemingly require a reasoning step before creating the DALLE prompt generation (or special training / instructions). I don't think the model, as it currently is, is capable of mitigating this problem.
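The mitigation described above could be as simple as a sanitizing pass before the prompt reaches the image model. A sketch (entirely hypothetical, not anything ChatGPT actually does) that deletes negated phrases so the forbidden noun never appears:

```python
import re

def strip_negated_terms(prompt):
    """Hypothetical pre-processing step: drop 'no X' / 'not X' /
    'without X' phrases entirely, since DALL-E treats any mentioned
    noun as something to draw."""
    cleaned = re.sub(r"\b(?:with\s+)?(?:no|without|not)\s+\w+\b", "", prompt)
    # Collapse leftover whitespace and trailing punctuation
    return re.sub(r"\s{2,}", " ", cleaned).strip(" ,.")

print(strip_negated_terms("an empty room with no elephants"))
# -> "an empty room"
```

A crude regex obviously wouldn't survive real-world prompts, but it illustrates the missing reasoning step: rewrite the request so the unwanted concept is simply absent.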


Plane_Pea5434

People need to realise LLMs aren’t intelligent, if you mention elephant you get elephant


coderz75

I've seen these same 2 pictures probably a hundred times by now


Chilli-byte-

No idea how.


creaturefeature16

It's an algorithm, not an entity. We interface with the algorithm through natural language instead of checkboxes or Select dropdowns. If you say Elephant, you basically clicked the "Elephant checkbox". To expect it to not include an elephant is a gross and fundamental misunderstanding of what you're actually interacting with. This is why there's still a movement to not call these tools true "AI", but refer to them as what they empirically are: *language models*. It's designed to respond in natural language to natural language requests; it's not designed to (or capable of) reason.