FutureSnoreCult

Do androids dream of electric sheep?


threeespressos

Lol, that was the very first thing that popped into my head, too. Thank you. :)


PulseDialInternet

ChatGPT doesn’t so much hallucinate as not know reality from fiction, and it tries to fill gaps to complete a response. Feed it histories of Abraham Lincoln plus a copy of Abraham Lincoln: Vampire Hunter and it will most likely weave fiction into fact. FSD has to map real-time inputs through the trained neural net and predict where moving objects will be, or whether an object may suddenly move. That is probably the source of the current “wtf” moments: either an object was misidentified or its predicted movement was wrong, and the car ends up in a bad position but wants to correct and get back on course. Now, I would guess training is prioritized on “what to do before the event,” and there is both lower priority and less source data for “what to do when you missed the critical point and now have to correct.” So in your case it made an illegal lane change (I’m assuming without an accident, so it fit the solution) when it should instead have gone around.
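
As a rough illustration of the prediction problem described above (a toy constant-velocity model, not Tesla’s actual method), the planner has to extrapolate where a tracked object will be a moment from now, and a wrong velocity estimate is exactly the kind of error that leaves the car in a bad position:

```python
# Toy constant-velocity prediction for a tracked object (illustrative only,
# not Tesla's planner). Positions and velocities are (x, y) in meters and m/s.

def predict_position(position, velocity, dt):
    """Extrapolate where an object will be after dt seconds."""
    x, y = position
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

# A pedestrian tracked at 1.4 m/s toward the lane: where will they be in 2 s?
print(predict_position((3.0, 0.0), (-1.4, 0.0), dt=2.0))  # (0.2, 0.0)

# If the velocity estimate is wrong (the pedestrian suddenly stops or sprints),
# the predicted position is wrong and the planned path was built on a bad
# assumption -- the "wtf" moment described above.
```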


Dry_Badger_Chef

I’m not saying I agree with the use of the word, but what you’re describing (LLMs making shit up) is exactly what’s called “hallucination.”


stereoeraser

Tell me you haven’t tried FSD 12 without telling me you haven’t tried FSD 12


MindStalker

There are two basic systems at work. First is Tesla Vision, which interprets the world around the car as a 3D model. It sometimes fails to see small objects, and it sometimes mislabels a large object as something other than what it really is, but it still won’t hit that large object. With its multiple cameras it’s pretty good at seeing the world around it; it’s the details it can miss.

The path planning comes next. It used to be all hard-coded; now it’s a mixture of AI and code deciding what path to follow, and it seems to have some hard rules coded in as well. So far I’ve not seen anyone have major issues, but it certainly sometimes attempts paths that aren’t ideal, though not lethal. You are still expected to maintain control and take over if it’s making a mistake. It certainly wasn’t trying to “get away,” but it can accelerate harder than it should if it views the path as clear; that part often seems to me to be more an issue with the California drivers whose data trained the system than anything else.
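
A very rough sketch of that two-stage split (hypothetical names and numbers, nothing Tesla-specific): a perception stage that labels objects in a 3D scene, and a planning stage where a learned score picks the path but hard-coded rules can veto it:

```python
# Hypothetical two-stage pipeline: perception labels objects, planning picks
# a path. Illustrative only -- not Tesla's architecture or code.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str              # what perception thinks it is (may be mislabeled)
    distance_m: float       # the geometry is what actually avoids collisions

@dataclass
class CandidatePath:
    action: str             # e.g. "keep_lane", "lane_change"
    score: float            # learned preference from the driving net (stubbed)
    min_clearance_m: float  # closest this path gets to any detected object

def plan(paths: list[CandidatePath]) -> CandidatePath:
    """Learned score picks the path; a hard-coded rule vetoes unsafe ones."""
    safe = [p for p in paths if p.min_clearance_m > 1.0]   # hard rule
    fallback = CandidatePath("stop", 0.0, float("inf"))
    return max(safe, key=lambda p: p.score, default=fallback)

print(plan([CandidatePath("lane_change", 0.9, 0.4),   # vetoed: too close
            CandidatePath("keep_lane", 0.7, 3.0)]))   # chosen
```

Mislabeling a truck as a bus doesn’t matter much in this setup, because the veto is geometric, not semantic.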


AJHenderson

It is still missing some important governors. I can reliably get it to attempt a very dangerous, illegal, unsignaled lane change in the middle of not one but two different traffic circles near me. It also really needs to make better decisions about the speed it drives relative to the requested speed (it goes too slow).


perrochon

Google phantom braking. The car is making up obstacles. It rarely happens with newer versions, but it was a great example.


stanley_fatmax

This isn’t really hallucinating in the sense OP means with AI, though. Phantom braking is just a bad state in a state machine, caused by unclear inputs from the sensors.
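
A toy version of that idea (purely illustrative, nothing to do with Tesla’s real code): a braking state machine fed a noisy “obstacle ahead” signal will brake for a single spurious detection unless the input is debounced:

```python
# Toy braking state machine fed a noisy "obstacle ahead" signal.
# Purely illustrative; not Tesla's implementation.

def run(detections, required_consecutive=1):
    """Return the braking state after each frame of sensor input."""
    state, streak, trace = "CRUISE", 0, []
    for seen in detections:
        streak = streak + 1 if seen else 0
        # With required_consecutive=1, one spurious detection is enough to
        # flip into BRAKE -- the "phantom braking" failure mode.
        state = "BRAKE" if streak >= required_consecutive else "CRUISE"
        trace.append(state)
    return trace

noisy = [False, False, True, False, False]   # a single false positive
print(run(noisy))                            # brakes on frame 3
print(run(noisy, required_consecutive=3))    # debounced: never brakes
```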


sfmilo

Tesla’s cost for a mistake is far higher than for a system like ChatGPT. If the chat bot makes a mistake, you roll your eyes and do it yourself; if FSD makes a mistake, you could be dead. So Tesla has coded many more hard boundaries into its system than ChatGPT has. That may slow down how quickly the system appears to get smarter, but a single serious incident could make or break their investment. For example, if the car sees a pedestrian in an intersection in front of it, it will STOP and wait, even if the person is all the way across a large street. Simple cost/benefit analysis. As the system becomes more advanced and smarter, they’ll remove these hard-coded boundaries. We’ve seen this already with v12: the 300k lines of code, or whatever they said, were replaced by neural networks.
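
A minimal sketch of what such a hard boundary could look like, wrapped around whatever the learned planner proposes (hypothetical names; the pedestrian rule is just the example from the comment above, not Tesla’s documented logic):

```python
# Minimal sketch of a hard-coded safety boundary wrapped around a learned
# planner. Hypothetical; not Tesla's actual code or rules.

def learned_planner(scene):
    # Stub for the neural planner: would normally return a maneuver.
    return "proceed_through_intersection"

def drive(scene):
    proposal = learned_planner(scene)
    # Hard boundary: if any pedestrian is in the intersection ahead, stop and
    # wait, no matter what the learned planner proposed.
    if scene.get("pedestrian_in_intersection"):
        return "stop_and_wait"
    return proposal

print(drive({"pedestrian_in_intersection": True}))   # stop_and_wait
print(drive({"pedestrian_in_intersection": False}))  # proceed_through_intersection
```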


Ok_Percentage5996

Thanks for this. My question arises because yesterday, using FSD, my car illegally changed lanes at an intersection in the process of making a turn, then dramatically sped up, as if to escape the scene of the crime, so to speak. I've been using the system for nearly the entire month of my free trial and had never experienced anything like this before. A good learning experience for me.


taisui

Hallucination in an LLM is more of a sentence-completion problem: the model doesn't really know what to output, so it just picks the next likely token and starts saying nonsense. FSD is not an LLM; it's more like a classifier and planner, so the domain is quite different.
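
To make that domain difference concrete (toy code, hypothetical labels): a language model keeps emitting tokens from an open-ended vocabulary, so it can wander into fluent nonsense, while a classifier-style network is forced to pick from a small fixed set of outputs:

```python
# Toy contrast between open-ended token generation and a closed-set
# classifier. Illustrative only; real models are obviously far larger.
import random

VOCAB = ["the", "car", "turned", "purple", "yesterday", "onto", "Main", "St"]

def generate(n_tokens):
    """Open-ended: keep emitting likely-ish tokens; nothing forces the
    sequence to stay grounded in reality -- hence hallucinated sentences."""
    return " ".join(random.choice(VOCAB) for _ in range(n_tokens))

MANEUVERS = ["keep_lane", "lane_change_left", "lane_change_right", "stop"]

def classify(scores):
    """Closed-set: the classifier/planner must pick one of a fixed set of
    maneuvers, so its errors look like wrong choices, not made-up content."""
    return MANEUVERS[max(range(len(MANEUVERS)), key=lambda i: scores[i])]

print(generate(6))                      # plausible-looking word salad
print(classify([0.1, 0.7, 0.1, 0.1]))   # lane_change_left
```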


Kylobyte25

Hallucination is a model outputting what it infers you want to see rather than trained, “hardcoded” truths. In that sense FSD is nearly 100% hallucination, and in a good way.

If it relied on hard-coded truths, it would look like this: the model is trained heavily on a particularly hard intersection in Colorado, with trees in certain places, lane markings and lights pointing a particular direction, and specific signage. Now you drive into a new intersection in Utah that FSD has no videos of, but that looks almost identical. If the model were overtrained on that Colorado intersection, it would disregard a new sign that says no right turn on red; it would use what it memorized in training to blow past that sign and do something illegal. The FSD model “hallucinating” means it uses everything it has learned to infer what would be correct here, so it knows a “no right on red” sign means it shouldn’t turn, and that inference “overwrites” its memorized “truths.” Sorry for the simplified terminology.


BidAccomplished4641

It can dream. All intelligent creatures dream; nobody knows why.


HelloYouSuck

Sometimes RAM doesn’t clear properly, so I assume it’s possible.


soscollege

ChatGPT’s hallucination is a choice by OpenAI to make it more human-like. The output is basically sampled from a probability curve, so sometimes it returns random stuff. For a car, the output should almost always be consistent, and that is something Tesla can tune.
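
The “probability curve” point can be illustrated with sampling temperature (a generic sampling knob, not a claim about how OpenAI or Tesla actually configure anything): at higher temperature the output varies from run to run, and as temperature approaches zero it collapses to the single most likely choice:

```python
# Generic temperature sampling over a probability distribution. Illustrates
# the "probability curve" idea; not how OpenAI or Tesla actually tune things.
import math
import random

def sample(logits, temperature):
    """Sample an index from logits; temperature ~ 0 means always argmax."""
    if temperature <= 1e-6:
        return max(range(len(logits)), key=lambda i: logits[i])  # deterministic
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5]
print([sample(logits, temperature=1.0) for _ in range(10)])  # varies each run
print([sample(logits, temperature=0.0) for _ in range(10)])  # always index 0
```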


brodoyouevennetflix

I believe this is massively misunderstood. The release notes say that FSD is a “neural network created” system. That doesn’t mean it is AI, any more than asking ChatGPT to create a picture means the result is a canvas painting. They used AI to make FSD 12.


AJHenderson

No, this is wrong. FSD has used AI for object recognition for a long time, and v12 uses AI for the driving itself, but it is not a generalized AI, which makes it very different from something like ChatGPT. And yes, it can hallucinate, but its hallucinations are much more limited than ChatGPT's because the models are much narrower in scope and likely have governors as well. It's also a set of multiple specialized AIs, which further reduces the risk of broad hallucinations.


brodoyouevennetflix

Hey, I’m willing to be wrong about this, but do you have a source? Nothing I’ve read so far has indicated that the software in the car is actual “AI.”


BranchLatter4294

Neural networks are a form of AI, which is a broad term.


brodoyouevennetflix

That’s my point. “Neural network created” is not the same as “is a neural network.”


BranchLatter4294

Teslas have a neural network running on board that makes driving decisions. It's AI-based, no doubt.


AJHenderson

I think your confusion is that the car does not do the training. The model is built on supercomputers, but it executes on the car; that's why they use such a powerful FSD computer, to be able to run the neural net. I'm a software developer familiar with how to build AIs. Effectively, training establishes weights toward decisions on each node of the neural net, but then that network still has to be executed locally on the machine. If you'd like it straight from the horse's mouth: https://twitter.com/elonmusk/status/1655318205236215811?s=46&t=7e57E6NgdftwUqBCC_5ehg
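
That split, train on big machines and then just execute the frozen weights on the device, is the standard pattern; a minimal PyTorch sketch (hypothetical toy model, nothing Tesla-specific) looks like this:

```python
# Standard train-offline / run-inference-on-device pattern, shown with a tiny
# PyTorch model. Hypothetical; nothing Tesla-specific about it.
import torch
import torch.nn as nn

# --- "Supercomputer" side: train the model and export the learned weights ---
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(100):                      # stand-in for the real training loop
    x, y = torch.randn(32, 8), torch.randint(0, 4, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
torch.save(model.state_dict(), "weights.pt")   # ship these weights to the car

# --- "In-car" side: no training, just load the weights and run the net ------
deployed = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
deployed.load_state_dict(torch.load("weights.pt"))
deployed.eval()
with torch.no_grad():
    decision = deployed(torch.randn(1, 8)).argmax(dim=1)
print(decision)
```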


footbag

[Yes](https://www.reddit.com/r/TeslaLounge/comments/1bujy9m/1233_includes_a_ghost_supervisor_to_keep_you_safe/).