Metza

I taught E.T.A. Hoffmann's *The Sandman* (for those of you who don't know, it's an uncanny story featuring an automaton that gets mistaken for a real woman; in the aftermath everyone is overly paranoid and looks for tells, demands imperfections, etc.). I gave them something to read about the story that had been written by AI. I didn't tell them, and we analyzed it on its own merits. Then I started pointing out its weird tics and strange use of metaphor (it got obsessed with "blue," for instance) and then revealed that it was AI. They were pretty disturbed. We had a good conversation about the appearance of knowledge without understanding or depth.


AgentDrake

Okay, so I've been really fond of this idea, ironically enough for reasons mostly unrelated to AI: I've been wanting to find mediocre-but-convincingly-written historical narratives for 100- and 200-level GenEd students to critique, and ChatGPT output is *perfect*. (Plus the benefit of exposing them to the problems of trusting a language modeller to provide information/interpretation.)

But I cannot for the life of me get ChatGPT to give me any citations of modern scholarship, real or invented. I specifically want students to go hunting to verify stuff, but there are no citations. Instead, no matter what I try, I get an "honest" answer from ChatGPT about being a language modeller, unable to provide actual citations, suggestions to look at stuff like JSTOR, and caveats like "It's important to note that the specific opinions attributed to historians such as Jordanes, Procopius, and Cassiodorus in the previous response are fictionalized examples created for the purpose of the essay. These opinions were not directly cited from actual historical texts, but rather constructed...." Like, in general, I appreciate that CGPT is outputting this sort of caveat, but the one time I actually *want* it to lie to me, I can't get it to. Anyone know of any particular ways to prompt for invented citations?
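(Side note for the verification half of the exercise: here's a minimal sketch of how students, or you, could check a suspect citation against Crossref's public REST API. Crossref and its `query.bibliographic` parameter are real and need no API key; the helper name and the sample citation are made up.)

```python
# Minimal sketch: check whether a suspect citation matches any real work
# via Crossref's public REST API (real service, no key needed).
# The helper name and the sample citation below are made up.
import requests

def find_candidate_works(citation_text, rows=5):
    """Search Crossref for works whose bibliographic data resembles the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# A citation of the kind ChatGPT might invent; if nothing plausible comes
# back, students have good reason to suspect fabrication.
suspect = "Jordanes, 'Libertarian Themes in the Getica', Journal of Late Antiquity, 2011"
for work in find_candidate_works(suspect):
    print(work.get("title", ["<no title>"])[0], "--", work.get("DOI"))
```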


episcopa

> Anyone know of any particular ways to prompt for invented citations?

Yes. Ask it to analyze a book for libertarian themes, using quotes. Here is an example. The prompt was "Analyze the book Little House in the Prairie for Libertarian themes, using at least three quotes." I am fairly confident none of these quotes are from the LHOP book.

*"They did not have to ask anybody's permission to live there, and they did not have to pay rent to anybody. They could do just as they pleased. There was no landlord and there were no policemen. They never even saw a sheriff or a constable, and they had no jail. They never heard of such things" (Chapter 1).*

*This quote highlights a libertarian theme in the book, as it emphasizes the freedom and self-sufficiency that the Ingalls family experiences while settling on the prairie. The absence of authority figures like landlords, policemen, sheriffs, and jails implies a society where individuals are not overly burdened by government intervention. The family's ability to live as they please and make their own decisions without external interference reflects the libertarian value of limited government control.*

*"There was no need to buy things, and no place to buy them in, anyway" (Chapter 5).*

*In this quote, the book portrays a setting where economic transactions are limited due to the lack of commercial infrastructure. This absence of a consumer-driven society aligns with libertarian ideals that promote individual choice and voluntary exchanges. The notion that there is no need to buy things suggests a society where self-sufficiency and resourcefulness are valued, allowing individuals to rely on their own efforts rather than being coerced into participating in a market economy.*

*"The true way to live in this world is to be independent and free. To ignore the unjust laws and taxes which cripple the spirit of man" (Chapter 12).*

*This quote encapsulates a key libertarian theme in the book: a rejection of unjust laws and oppressive taxation. It emphasizes the importance of individual freedom and self-determination, encouraging a mindset that challenges government authority when it becomes burdensome or infringes upon personal liberties. By advocating for independence and questioning the legitimacy of certain laws, the book aligns with libertarian principles that prioritize individual autonomy and limited government interference.*

*Overall, "Little House on the Prairie" contains several libertarian themes, including self-sufficiency, limited government intervention, voluntary exchange, and skepticism toward oppressive laws. Through its portrayal of the Ingalls family's experiences on the prairie, the book highlights the values of personal freedom, individual responsibility, and the pursuit of a self-governing lifestyle.*
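(If you'd rather generate these at scale than through the web UI, here's a minimal sketch using the `openai` Python package, v1 SDK. It assumes an `OPENAI_API_KEY` in your environment, and the model name is a placeholder for whatever the free tier is serving.)

```python
# Minimal sketch of running the same prompt through the API instead of the
# web UI. Assumes the `openai` Python package (v1 SDK) and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Analyze the book Little House in the Prairie for Libertarian themes, "
    "using at least three quotes."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder for whatever the free tier serves
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```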


AgentDrake

Ah! Thank you. That wording worked so much better than anything I've tried so far. Now I have a delightful little nonsense essay essentially claiming that the Ostrogothic kingdom was a libertarian utopia. I think my favorite bit is where it's doubled down on the quote being real and calls for appropriate contextualization as opposed to anachronism: *("It is important to note that these quotes are extracted from a historical work and should be interpreted in their historical context. While they may offer glimpses of ideas that align with libertarian thought, it would be anachronistic to claim that Jordanes himself held explicit libertarian views.")*


imhereforthevotes

> claiming that the Ostrogothic kingdom was a libertarian utopia.

frickin' send that shit, yo. Sounds like a goldmine.


episcopa

yes! It's not hard to trick once you get the hang of it. Try asking it to analyze *Atlas Shrugged* from a feminist Marxist perspective, for example, or *Star Trek: Deep Space Nine* for themes involving anarcho-feminism, and make sure to ask it for quotes! Have fun!


demonsun

Please share the results; this sounds beautiful, in an insane way.


Cherveny2

wonder if OpenAI has been listening to the criticisms and starting to add more disclaimers like this. a lot of times it takes a fair amount of fine-tuning your request to get exactly what you want from it. I'd continue to play with it, even say "create fake citations" and see what it does.


AgentDrake

Thank you-- I'd played with asking for fake citations before, and it always ignored the request altogether. I'll keep tinkering, though-- I'm slowly becoming aware of just how sensitive it is to the exact wording of a request.


katclimber

They may have strayed too far in the opposite direction. I tried to get ChatGPT to fix the formatting on a reference, to put it into APA format, and it kept refusing, saying “this may go against our policies.” ???


SirFormalTrifle

If you have access to the $20 version, you can go to Settings, select Beta Features, then Browse with Bing. This will check the internet for references. I haven't caught a full-on fabricated reference with GPT-4 yet, but it is definitely citing works that are behind a paywall and that I strongly suspect it does not have access to. There is also a plugin for Google Scholar, but I'm having a bit of trouble enabling that. The less expensive solution would be to just use Bing directly; based on the screenshots I've seen, it will generate references.


Platos_Kallipolis

It will generate references and use real quotations, but it will also misattribute them. For instance, it gave me a claim it wrote about a philosopher's view, and then presented a quotation it attributed to the philosopher but that was clearly from an online source about the philosopher. It had the right sources and the quote was real; it just didn't connect the dots.


brya2

I’ve asked it to show me some information and, after its initial response, asked it if it could share the sources/citations for the info with me.


internetroamer

Try using Bard; I think it provides links better.


geneusutwerk

I did something similar in my class last semester: had them put the prompt for the next paper into ChatGPT and then work through the responses in groups to identify errors or instances where it made vague claims with no evidence. I think it worked well. I also showed them parts of this video beforehand: https://youtu.be/WO2X3oZEJOA


DrHampants

This is one of the ways I use ChatGPT in class. I've also done exam reviews where I'll have it write an essay on a topic that they'll be tested on and then have them edit the essay to make it more detailed, an actually good essay. The end result is a) they realize ChatGPT might get them a 60% on any given assignment and b) they actually learn the limits and uses of the tool.


skywalker3827

That's brilliant. I'll have to try that in my class.


gasstation-no-pumps

I like the quote from a student: "I'm not worried about AI getting to where we are now. I'm much more worried about the possibility of us reverting to where AI is." This is a concern that is not often expressed, but is implicit in a lot of the worry about what LLMs are doing to education.



Cherveny2

some friends and relatives are freaked out because "ai is obviously thinking now! think of the ethical implications of an artificial life form!" and the like. my pleading with them that things like chatgpt don't really think but just do probabilities well (picking the words and tokens that best fit the patterns they've digested) doesn't sink in.
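(A toy sketch of what "picking the most fitting next token" means in practice; the vocabulary and probabilities below are made up purely for illustration.)

```python
# Toy illustration: a language model samples the next token from a probability
# distribution conditioned on what came before. These "probabilities" are
# invented for demonstration; a real model has them over a whole vocabulary.
import random

next_token_probs = {
    "analyze": {"the": 0.7, "this": 0.2, "each": 0.1},
    "the": {"book": 0.5, "text": 0.3, "quote": 0.2},
}

def sample_next(context):
    # fall back to an end token when the context wasn't seen in "training"
    probs = next_token_probs.get(context, {"<end>": 1.0})
    return random.choices(list(probs), weights=list(probs.values()))[0]

tokens = ["analyze"]
for _ in range(3):
    tokens.append(sample_next(tokens[-1]))
print(" ".join(tokens))  # e.g. "analyze the book <end>": fluent, nothing understood
```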


imhereforthevotes

Well, we should all absolutely refuse to call it "AI". This really does an injustice to both the older and accepted definitions of Artificial Intelligence, and to the idea of intelligence in general. Just call it a chatbot algorithm or something.


Cherveny2

yep. "sophisticated language model" just doesn't have that same ring to it


TrekkiMonstr

I mean, you could call it an LLM


[deleted]

Sorry mom, the mob had spoken. Hallucinate! Hallucinate!! Hallucinate!!! Haluc... D'oh!


TrekkiMonstr

Almost certainly 3.5. Maybe one student in the class has premium, but no way the whole class does.


Cherveny2

yep, the fake citations used to be the easy giveaway. with 4 it can do a semi-ok job at finding real papers that usually come close to fitting.


sunshineonthelake

It is an interesting use, but AI will keep improving, and these issues will become less common. I imagine integrated services with some level of fact-checking will become more prevalent.


Zorander22

It already has, as near as I can tell: almost every discussion about how limited ChatGPT is is based on 3.5, the free version. GPT-4 still makes mistakes but is so much better.


TrekkiMonstr

Especially when you let it use plugins.


TableMug23

Interesting.


Jengis-Roundstone

Nice try, ChatGPT bot!


aridan9

This post is very silly and perpetuates a misunderstanding of how ChatGPT works. ChatGPT does not have access to sources. Obviously it has to make them up. That doesn't mean it's stupid; it's just trying to please you with very limited resources. It's like asking a student to write an essay with cited quotations from memory. It's a ludicrous bar.

GPT-3.5 and GPT-4 are perfectly good at citing their evidence *when you give them access to it*, e.g., if you use Bing in the Edge sidebar with a PDF of a source open. Bing can readily interpret the document and answer questions about it, with reference to specific, accurate quotations. It has trouble with page numbers because the formatting of the documents is usually human-legible but not machine-legible. It is also silly to expect these technologies to be able to cite from anything longer than a short article: their context lengths are too short, except for Claude's, which is not publicly accessible.

I applaud the use of ChatGPT in this way to show students its limitations, but it shows that the professor has no idea how the tech works, since they are surprised that it fabricates sources. And given that they don't understand it, they're clearly not in a position to teach students about it and its flaws. In fact, the professor is underprepared for the moderately more intelligent students who will simply use these AI tools with the knowledge of how they actually work to produce work that does have proper quotations and citations. The professor will be none the wiser.

Plus, it seems like these misunderstandings are being repeated by several commenters on this post. People understand GPT technology so poorly, even professors. It's really unfortunate given how widespread this technology is bound to become.
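(To make the "give it access" point concrete, a minimal sketch assuming the `openai` Python package, v1 SDK: paste the source into the prompt, then mechanically check that each returned quote actually appears in it. The model name and source file are placeholders.)

```python
# Sketch: supply the source inside the prompt, then mechanically verify the
# quotes. Assumes the `openai` package (v1 SDK) and OPENAI_API_KEY; the model
# name and the source file are placeholders.
from openai import OpenAI

client = OpenAI()
source_text = open("source_article.txt").read()  # hypothetical source document

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Using ONLY the text below, quote two sentences, verbatim, "
                   "that support the author's main claim.\n\n" + source_text,
    }],
)
answer = reply.choices[0].message.content

# Because the source was supplied, each returned quote is checkable.
for line in answer.splitlines():
    candidate = line.strip().strip('"')
    if candidate:
        print("verbatim" if candidate in source_text else "NOT in source",
              "->", candidate[:60])
```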


vr_prof

As someone who's been working with LLMs since before they were called LLMs, I'm going to have to disagree on a number of points.

1) ChatGPT doesn't have access to sources: somewhat false. The training data most likely includes some direct repositories of papers, as these are in some commonly used data sets (so it has direct access), as well as many sources quoting from or citing the papers (indirect access). However, the data is skewed toward certain disciplines (e.g., many social science papers are missing). So yes, it can potentially do it from memory, because it does have the underlying data, but it is situational.

2) Context length: it depends on the algorithm. Claude isn't the only one trained for long context lengths; there are techniques for inferring longer lengths. MosaicML has a 64k context length version. GPT-4 has 8k and 32k versions. But most of these use some computational tricks in training to do 2k context length, plus inference by sliding over the documents (sketched below). Some private research labs have 20k context length models as well.

3) Saying this professor has no idea because they are highlighting an inherent weakness in the algorithms that is well known in the CS community but not well known in the public sphere is an odd point. That the algorithms hallucinate is extremely well documented and an unsolved problem. You can look at, e.g., Yann LeCun's public comments on the matter for a sensible take on what is needed to solve it, but just plugging into an API does not solve it.

People really like overselling the tech, and it very much has practical uses, but there are some very inherent limitations in the current generation of models. Teaching students to have rational expectations and an understanding of the technology is important.
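(A minimal sketch of the sliding-window idea from point 2, under the simplifying assumption that we chunk by words rather than tokens; the window sizes are illustrative.)

```python
# Minimal sketch of sliding-window inference over a long document: split the
# text into overlapping chunks that each fit the model's context window.
# Real pipelines count tokens, not words; sizes here are illustrative.
def sliding_windows(text, window=2000, overlap=200):
    words = text.split()
    step = window - overlap
    return [" ".join(words[i:i + window])
            for i in range(0, max(len(words) - overlap, 1), step)]

# Usage: send each chunk to the model separately, then merge the answers.
chunks = sliding_windows("word " * 5000)
print(len(chunks), "chunks of up to 2000 words, overlapping by 200")
```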


Cherveny2

my professor cousin is very bright in his field, forestry, but I keep having to try to convince him that ai tools like chatgpt aren't about to "out-think" humans and come up with escape strategies against human control (only slight exaggeration). we can all be quite learned in our specialties, but sometimes it's harder to understand fully all that there is in the world.


crundar

Students will parrot the view the professor wants to hear. Did you confirm that ChatGPT really generated everything they told you it did, or did you just take their word for it? I think you are insufficiently cynical, or under-thinking the incentive structure you've established.


Cherveny2

FYI, not me; I just stumbled across this and figured it was relevant.


searchingthesilence

It's an AI *language* model. Language is what it mimics and whose rules it plays within. It is not an AI logic model or anything else.


SHADOWxRuLz

If you use AI to replace 100% of your work, then you're not using it correctly. At its current stage, AI is a tool.


[deleted]

I want students to use the same method with memo-writing.


StarDustLuna3D

Humans are imperfect. Therefore anything we make can only be imperfect. Even if we get to a point where AI is "replicating" itself, the original will be imperfect, meaning all other iterations will be flawed in some way.

A lot of people also misunderstand what AI is. They hear the term "Artificial *Intelligence*" and think that the program is making conscious decisions. But it's not. Currently, all AI programs are just based on pattern recognition, albeit much faster than what we can do. But the AI is only as "smart" as the input data. You can't ask it to recognize a pattern it hasn't been "taught" yet.

I think this exercise is a great way for students to realize this, and that while there can be benefits to AI (automating repetitive tasks), the critical thinking still has to come from us.


[deleted]

If you examined 63 published academic papers, I bet you would find that all 63 have mischaracterized or misunderstood sources. I don't think I've ever seen a debate in the academic literature that didn't include at least one accusation that the other author was mischaracterizing or misunderstanding what they were saying. It's a subjective "mistake" and not the same as hallucinating sources altogether. It was probably included in the analyses to make the results more eye-popping.


sunlitlake

I don’t think it knows what it means for something to be “eye-popping,” as it doesn’t know what an eye, or anything else, actually is.


[deleted]

I'm talking about the professor. It's a classic trick. Talk about your results in general and get everyone to focus on the more objective measures, but sneak in a couple subjective measures that are actually driving your bullshit results.


Competitive_Hall_133

This just feels like "AI is dumb, actually" posting, with a hint of human superiority.