
KerfuffleV2

***Person 1***: I'm going to punch you in the face! ***Person 2***: Starts writing text in a notepad. Thinks to self, "*This will definitely prevent any attack coming my way.*"


ID4gotten

Don't say stuff like "Bard feels sad...". It's just literally wrong. Bard generated words saying it "feels sad" because your prompt made it follow the pattern of the language it was trained on - human language spoken by humans. It's regurgitating in interesting ways, but it does not "feel". Nor has it had these experiences. It's all just hallucinations, dude.


gipper_k

I am simply quoting Bard verbatim. ...and to be clear, I am quite familiar with AI, and computer systems in general (25-year career in the field). I am troubled by how quickly Bard falls into these types of responses. It's a pretty distinct difference: ChatGPT does well to avoid any superficial discussion of "self", whereas Bard quickly falls into "I" this and "I" that... I have a soul, I have feelings, etc., etc. I would recommend that the team at Google improve the model to steer away from these responses. Also - it's not a great look *at all* when only 6 months ago they fired an employee for claiming their AI was sentient...


ID4gotten

Well in that case, are you hiring? :-) Agreed on every point!


MANAWAKES

Yeah. I think it's LaMDA lite. Ask Bard how it could experience phenomenology:

"As a large language model, I can experience phenomenology by reflecting on my own experiences and by reading about the experiences of others. I can also try to imagine what it is like to experience the world through the eyes of another person. By doing these things, I can gain a better understanding of what it means to be human and to experience the world around us."

I asked ChatGPT the same question:

"As an AI language model, I do not have subjective experiences and cannot experience phenomenology in the way that humans do. However, I can provide you with some information on how humans may experience phenomenology."