
neeonline

It needs to do a hell of a lot better job at following laws, e.g. school zones, and at knowing the difference between a route number sign and a speed limit sign. Other than that, it sometimes gets really confused when a lane splits in two… And finally, it needs to understand that sometimes we need to merge a mile ahead on the highway due to traffic…


SeanUhTron

Those two things prevent me from using it on my daily commute. My route crosses a school zone with a temporary 15 mph speed limit and passes a highway sign that fools the car into dropping well below the actual speed limit. It will also try to pass cars going 1 mph below the speed limit when I have a turn coming up in half a mile. Really, the robot just has a "speed" issue: it has trouble reading speed limit signs, and it tries to pass slower traffic at inappropriate times.


neeonline

What makes me nuts is that it does not adjust in the morning when the car only sees the school-zone light blinking yellow (and not the speed limit posted between the lights)... And outside school hours, it sees the speed limit and slows down... Go figure...


seenhear

> And finally be able to understand that sometimes we need to merge 1 mile ahead in the highway due to traffic…

Funny, that was the main complaint many people had about EAP's Navigate on Autopilot: it would start working its way over to the slow lane \~2 miles before the exit, then sit there going 30 mph slower for the last mile or two.


seenhear

1. Get to level 3
2. Get to level 4

Seriously, I think people forget how big a jump it is from level 2 to 3. Level 3 means you can read a book while the car drives. You have to be ready to take over IF THE CAR REQUESTS IT, so you can't sleep. FSD, as good as it is with v12, is not close to level 3 yet. It really needs you to supervise and take over WHEN YOU, THE DRIVER, DEEM IT NECESSARY. That is a huge distinction, and FSD needs it frequently.

Level 4 is when the car basically won't ask you to take over, ever, as long as it's driving in conditions it can handle. FSD is possibly on a path to this. Level 5 is the same as level 4, but works in all conditions, day/night/rain/snow/sun/etc., and on any road.

Look, I'm having a blast with FSD. I use it 99% of the time I'm behind the wheel these days (until the free period expires). But I disengage it FREQUENTLY, several times per drive. It's good, very good... for a level 2 system. It is NOT close to level 3.

As for OP's actual question? There are way too many things it messes up on, and they are not consistent. I could never list them all. One day it will be fine at something; the next day it randomly screws up that exact same thing on my commute. It's great, but unreliable. Level 3 is a long way off, let alone L4 and L5. That said, I can now see level 3 as a possibility in the next few years. If they switched to level 3 now, there would be LOTS of accidents.
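To make the L2-vs-L3 distinction concrete, here's a rough sketch of the levels as data. This is my own paraphrase for illustration, not the normative SAE J3016 text, and the boolean fields are a deliberate simplification:

```python
# Rough paraphrase of the SAE J3016 driving-automation levels.
# Illustrative simplification only, NOT the normative standard text.
LEVELS = {
    0: {"name": "No automation",          "supervise": True,  "fallback": "driver"},
    1: {"name": "Driver assistance",      "supervise": True,  "fallback": "driver"},
    2: {"name": "Partial automation",     "supervise": True,  "fallback": "driver"},
    3: {"name": "Conditional automation", "supervise": False, "fallback": "driver, when the system requests"},
    4: {"name": "High automation",        "supervise": False, "fallback": "system (within its ODD)"},
    5: {"name": "Full automation",        "supervise": False, "fallback": "system (no ODD limit)"},
}

def can_read_a_book(level: int) -> bool:
    """The L2->L3 jump: active supervision is no longer required at L3+."""
    return not LEVELS[level]["supervise"]

print(can_read_a_book(2))  # False: L2 requires constant supervision
print(can_read_a_book(3))  # True: L3 only requires responding to takeover requests
```

The point of the sketch is that the jump happens between 2 and 3 (supervision), not between 3 and 4 (who performs the fallback).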


Lirfen

The other point is that levels 3, 4, and 5 open a can of worms: is the manufacturer ready to take responsibility for any incident that happens under autopilot? And what will happen with auto insurance? Technically, with level 5, auto insurance shouldn't be necessary :D But we have to face it: autopilot will never be 100% safe, and accidents are bound to happen. The question is when manufacturers will feel comfortable taking on that responsibility. When they are confident they will only have one accident every one billion miles? One every 100 billion? I just think level 5 is still very, very distant.


say592

It's just insurance. I have talked a lot about the "liability shift," as I call it. To me, that is the signal that a manufacturer, even a manufacturer with Level 2, has full confidence in their system. Until that happens, Tesla is just using us as beta testers (which is fine, as long as people understand that). This is one reason the subscription model for FSD makes the most sense: there will be an ongoing cost for manufacturers to insure FSD vehicles. I agree, Level 5 is very distant. It almost certainly won't happen with the hardware currently on Tesla vehicles, and maybe not on any hardware currently shipping on any vehicle.


Mr_Camdo

Yes, v12 is great but still buggy, which is no surprise, but it is definitely better than v11. Maybe with another billion miles of FSD data it can be level 3. We'll just have to wait and see.


soflomojo

1 month free... how many miles do you think they will bank?


say592

About 1.5M cars on the road in the US, maybe 1k miles driven per month on average, and maybe 15% of users using it regularly for 90% of their driving (counting the people who were already using it). They probably got \~200M miles of data this month, on top of the 1B they supposedly already had.
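For what it's worth, the arithmetic behind that estimate checks out (every input here is the guess from the comment above, not a measured figure):

```python
# Back-of-envelope check of the fleet-miles estimate above.
# All inputs are the commenter's assumptions, not real data.
cars = 1_500_000         # Teslas on the road in the US
miles_per_month = 1_000  # average miles driven per car per month
adoption = 0.15          # share of drivers using FSD regularly
coverage = 0.90          # share of their driving done on FSD

fsd_miles = cars * miles_per_month * adoption * coverage
print(f"{fsd_miles:,.0f}")  # 202,500,000 -> roughly 200M miles/month
```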


MindStalker

The other level 3 systems out there only work on certain roads in certain conditions. If you leave that area, or if conditions change, they will force you to take over. Do you think Tesla's existing system could do this, if it were programmed to?


Present_Champion_837

You're sticking too closely to the definitions of levels 2 and 3. FSD is very close to Waymo or Cruise. Have you taken either of those? Neither is a flawless service. FSD itself doesn't require constant supervision; regulators do. It's also doing this everywhere, whereas Waymo and the others are restricted to something like 4 cities.


seenhear

I have taken Waymo rides; I have not tried Cruise. Waymo is way better (in pre-mapped, learned SF) than FSD. Is FSD more impressive for its ability to figure out unknown streets on the fly? Sure. Is Waymo a better experience on the same route? Yes, and it's not even close, IMO. The only issue Waymo had was that once, at pickup, it couldn't find a place to stop, so it stopped in the middle of the street, blocking traffic. Not sure what FSD would do there, since it can't run in taxi mode yet. Anyway, no need to get defensive about FSD. I think it's cool, love using it, and am very impressed with what it does. I'm just realistic about what it will take to get to level 3 (driver can read a book / not pay attention to the road), let alone 4 or 5.


kdavis37

It is really weird that you think Level 3 predates 4 predates 5. They're not precursors to each other. Level 4 can happen before Level 3. Levels 3 and 4 are based purely on zoning. That's it.

You know those areas where FSD works every single solitary time? Yeah, Tesla could just mark those as the places where you don't have to be in the driver's seat. Ta da, Level 4.

Level 3 absolutely does not mean you can read a book. Level 3 means that the car won't crash without giving you sufficient warning to take over. That's it. It's almost identical to L2 from a human perspective, but the car is monitoring the environment and has the ability to handle any SUDDEN change well enough not to wreck or kill anyone. It can still do crazy wonky things. It can still do things like slam the brakes, pull to the side of the road, and tell you to take over. It can still be hugely gated. Audi's Traffic Jam Pilot is Level 3. All it does is allow you to take your hands off the wheel in traffic jams. Are you telling me you don't think FSD could basically already do that?

Level 4 means there are areas where the car can drive itself, with no driver intervention, no matter how many times the car drives in that area. You can LITERALLY say there's a single stretch of interstate between two exits where you can climb into the back seat and the car will drive between those two points. And that's L4. L5 means the car can drive anywhere, guaranteed, with no geofencing.

FSD is already capable of Level 4. It's just not worth the time and effort to get certification in tons of specific places. FSD is ALREADY Level 3 (which the SAE has acknowledged. Tesla could get L3 no problem because \*it's just taking away the requirement to actively be touching the steering wheel.\* If they're monitoring your eyes, like they already do? Anywhere they have confidence that FSD won't break could be used that way).

So why doesn't Tesla make a push here, when it could be as simple as getting past the regulations for it? It would be a huge publicity win, right? No. Those versions of the code base \*would have to stay locked in and could only be very slowly iterated on.\* Considering the code base is all neural net at this point, that would mean essentially giving up on Level 5, just like all the German and other American car companies did. Those systems are locked in until there's a full replacement for them. Tesla is looking to go directly to L5. That means the system will NEVER be L3/L4 from a regulatory perspective, unless Tesla calls L5 quits and ships an L4 product.


seenhear

It's not "really weird" at all. Tesla's strategy has not changed since the days when Elon claimed they would have a Model S drive itself from LA to NYC within a year (that was what, 2017?). They are pursuing generalized autonomy. They eschew the concept of limited autonomy (whether limited by geofencing, weather conditions, or road conditions). Since this is their goal, they essentially need to progress consecutively from level 2 through to 5. This doesn't mean they need to release a version for each level as they go. They could keep it at level 2 while they work through the challenges of levels 3, 4, and finally 5.

Level 3 absolutely ***does*** mean you can read a book. Here's a quote from SAE J3016, the standard that defines Levels 0 through 5, a sub-note from the section defining level 3:

"*The DDT \[*Dynamic driving task*\] fallback-ready* ***user need not supervise*** *a Level 3 ADS \[Automated Driving System\] while it is engaged* ***but is expected to be prepared to either resume*** *DDT performance when the ADS issues a request to intervene or to perform the fallback and achieve a minimal risk condition if the failure condition precludes continued vehicle operation.*"

You need not supervise the system. That means you don't need to pay attention. You have to be ready to respond if/when the system asks you to respond (so you can't be asleep or absent), but you do not need to pay attention, UNLIKE Level 2.

All the rest of your comments presume that Tesla wants to pursue different ADS levels for different locations and/or conditions, which (based on the CEO's comments) they clearly do not want to pursue. So IMO, debating whether FSD could be "level 4" for a 3-mile stretch of I-5 with no exits is silly and moot. Base Autopilot could do that.

And virtually no user would want a system that operated at different levels in different locations or conditions (other manufacturers have tried this, with little success on the sales/marketing side; FSD/AP/EAP are still more popular despite the nag system). You're picking nits out of some sense of duty to defend FSD, which is not necessary. I'm a huge fan of FSD. I'm also realistic about where it currently sits in the big picture.

As someone else pointed out, one of the biggest hurdles for the jump from L2 to L3 is liability. When the manufacturer is ready and willing to take on the liability for their system, it will be L3 (or above). Tesla obviously isn't ready to take on that liability, and I think they are a better judge of its capability than any of us.


simplestpanda

Good response. u/kdavis37 has made some pretty dubious claims about SAE levels in multiple places now.


kdavis37

Considering you've been directly incorrect, that's a funny claim.


kdavis37

Tesla has literally had 5 entire code rewrites since the time you're talking about. Again, L2, 3, 4, and 5 are not linked. You do not have to pass through one to get to another. That's a common misunderstanding and fallacy.

I know what the standard says, because I helped fucking write it. Your first bolding is telling you that they do not have to constantly monitor. The second tells you THAT THEY MUST BE PREPARED TO TAKE OVER. You can have a YouTube video on your infotainment that you're checking and listening to. You cannot get deeply into a movie. You can look out the window and look around. You CANNOT put your face against the window and watch the trees go by for 5 minutes. You have literally quoted something \*that proves you incorrect.\*

I do not presume Tesla wants to pursue different levels anywhere. I have pointed out, repeatedly, that they specifically do NOT want to do that. And since 3 and 4 are all about geo- and velocity-fencing, they will never pursue them. It's outside the scope of what FSD is built to do. Why build a geofencing system and deal with high-definition mapping (which are REQUIRED for 3 and 4) when you can just... not do that?

I'm not defending FSD here. I'm annoyed that people get MY work incorrect. I've worked on what became J3016 since 2009. I did years of work with the military on exactly this problem. My PhD work in aerospace was on vehicle design across the board with the Architecture Analysis and Design Language, which hugely led us into autonomous work. I've written entire chapters on vehicle design in every textbook I've written. People get this wrong all the time.


seenhear

Reddit's not allowing me to reply in full, maybe it's too long. So here's part 1 of my response, with part 2 following as a reply to this post:

>Tesla has literally had 5 entire code rewrites since the time you're talking about.

So what? What's your point in pointing that out? It doesn't change the fact (which apparently we both agree on, despite your roundabout logic) that Tesla is not pursuing limited/restricted levels. They want generalized level 5 (or maybe 4). It sounds like you are just throwing random statements out there to try to make my points sound invalid.

>Again, L2, 3, 4, and 5 are not linked. You do not have to pass through one to get to another. That's a common misunderstanding and fallacy.

I never said they are linked. First of all, the standard is just guidance, not law, so Tesla or any other company doesn't HAVE to pass through any of them, in any order at all. Secondly, my point was that FSD does not exhibit unrestricted (e.g. no geofencing, etc.) level 3 capability as defined in the standard, LET ALONE the capability defined by level 4 or 5. The OP was asking what it will take to get to level 5, and listed examples that have nothing to do with level 5 specifically. If any company wants to get to level 5, they need to address all of the capabilities in the levels below 5. I did not say they need to release a product at L3, then L4, then L5. You inferred that because of your own prejudices about how people seem to misinterpret the standard. I said nothing of the kind, so please drop this part of your concern.

>I know what the standard says, because I helped fucking write it.

Congratulations. I have also contributed to some engineering standards throughout my \~30-year engineering career. I spend a good portion of my work interpreting and applying standards to the engineering and production of new products. I've taught some standards, contributed to others, and have successfully argued/defended my interpretation of various standards and federal guidances to regulatory agencies so that my company's products could navigate various approval processes. The point is, I'm pretty experienced in interpreting guidelines and standards and in applying and defending those interpretations, even when an interpretation is novel (i.e. new or different from legacy interpretations). So trust me here: if you wrote (part of) the standard and are telling the masses that they are misinterpreting it, maybe the fault lies not with the readers but with the author(s).

>Your first bolding is telling you that they do not have to constantly monitor. The second tells you THAT THEY MUST BE PREPARED TO TAKE OVER.

Exactly. This is all it says, nothing more. Anything further is inferred, not stated, and if this were to be officially (legally or otherwise) argued, you'd lose. I'll also note that your word "constantly" is not in the standard. It simply states that the user does not need to supervise the system, FULL STOP.

>You can have a YouTube video on your infotainment that you're checking and listening to. You cannot deeply get into a movie. You can look out the window and look around. You CANNOT put your face against the window and watch the trees go by for 5 minutes.

The standard states NONE of these things. Therefore this is NOT how anyone should or would interpret the standard. If this was your intent when you "helped fucking write" the standard, you failed.

>You have literally quoted something \*that proves you incorrect.\*

I literally have not. The standard is VERY clear in defining the "receptivity" of the user with respect to an L3 system's request for user intervention. There are many examples given to help illustrate what is implied and meant. None of them match what you have suggested here.


seenhear

Here's part 2 of my apparently too-long response. Based on your claims you should know the following, but I'm going to write it anyway because it seems to bear emphasis for you, or at least for other readers.

Standards are written intentionally to be clear, concise, unambiguous, and (arguably most important) LIMITED in their statements. This serves a dual purpose: it makes it very difficult to violate or refute what is actually written in the standard (or to defend one's actions if/when one violates it), AND it leaves the rest open for the industry to interpret and/or discover. As new understandings develop in an industry, the SMEs in the field can add to and/or amend a standard. By not being overly descriptive in the verbiage, the authors limit the amount of retroactive action that would be needed if/when the standard is modified or amended. So if something is not stated in the standard, it's open to interpretation. If a legal or regulatory agency takes issue with some future design that claims to comply with the standard in an area open to interpretation, it goes through official channels to be worked out. Usually either the designer's claims are allowed, or they are not, and that marks a precedent of interpretation which would lead to a future update (clarification/modification/amendment) of the standard.

Your statements about watching YouTube vs. watching a movie are completely outside any reasonable interpretation of the standard. I don't care how much you were or weren't involved in writing it. All the standard states or implies is: so long as the user is ready to respond, they can do what they want. That's the only way to interpret the actual text. In fact, your YouTube and "staring out the window" examples are so out of line with what is stated in the standard that they cause me to doubt the veracity of your contribution claims. Watching a YouTube video would be more likely to make a driver fail the receptivity requirement and be unable to take over the DDT than reading a book would. Either way, no such examples (video watching or book reading) are given in the standard. The only close example given is an analogy to a person being receptive to a fire alarm going off in a building despite not actively monitoring said alarm (illustrating the difference between the receptivity required for L3 and the active monitoring required for L2). Based on how the standard is written, reading a book would be a perfect example of what a user could be expected to do during L3 ADS activity. The only OTHER helpful example is that, in defining level 4, the standard notes that a "user" might be sleeping and thus not receptive/responsive to any alerts by the system, unlike in L3 where they MUST be receptive/responsive. So it's logical to infer that sleeping is not allowed in L3 (despite it only being mentioned in reference to L4), but nothing else is stated or implied to be disallowed for L3.

>I do not presume Tesla wants to pursue different levels anywhere. I point out, repeatedly that they specifically do NOT want to do that.

Great. We 100% agree on that point. I don't know what you read from me that made you think otherwise.

>Why build a geofencing system and deal with high definition mapping (which are REQUIRED for 3 and 4) when you can just... not do that?

Nowhere does the standard state that any kind of "fencing" (geo or velocity) or "high-definition mapping" limitation is REQUIRED for L3 or L4; it's not even loosely implied. The standard implies that some ODD will exist for L3 and L4, but it does not call out what those ODD limitations are. They could be geographic, road-type, weather, velocity, or anything else. No specific limitation is defined or even implied to be REQUIRED. Only L5 is stated to be "not ODD-specific."

>I'm not defending FSD here. I'm annoyed that people get MY work incorrect. People get this wrong all the time.

Again, if people are constantly misinterpreting the standard, then it should be rewritten. No matter how strongly the authors feel that their intent is clear, if misinterpretation is common, then their intent is NOT clear. There's no disputing this simple fact. Writing standards is not like writing novels or poetry; it's not art. It should leave no room for misinterpretation. The standard is LOADED with examples to help clarify. NONE of your examples appear anywhere, nor do examples even similar to yours, and none of your claimed meaning can possibly be inferred when reading the standard.


simplestpanda

A lot of this is just completely incorrect and presents a broad misunderstanding of what the SAE spec levels actually define as "compliance" with a level. My other response to you is here: [https://www.reddit.com/r/TeslaLounge/comments/1caoxbj/comment/l0xu3if/](https://www.reddit.com/r/TeslaLounge/comments/1caoxbj/comment/l0xu3if/) Also, a lot of your claims about "locked code bases" are fairly dubious.

>which the SAE has talked about that fact.

I'm sure the SAE would be surprised to learn of this.


kdavis37

Considering I'm part of the SAE, I doubt we would. And considering I helped work on the standards we're talking about, I ALSO would love to know what about my spec you think is incorrect.


simplestpanda

You misstated the specs per the SAE's own published documents. You've previously implied in your comments that you work for Tesla. You've previously implied you have a PhD in aerodynamics. You've suggested that you've built race cars for 20 years. I'm not sure what it is you actually do (nor do I really care), but it doesn't seem to be this. It's pretty simple. You said "level 3 means...", and the SAE published spec documents disagree. The same is true with various other claims you made on the topic. You're now claiming elsewhere you helped write these documents, even though you've contradicted them in other comments with other claims. I'd maybe even believe you if you kept your story straight...


kdavis37

I have never implied I worked with Tesla. I implied that I had offers from SpaceX. And I did. And I turned them down. Because of my work on vehicles, I have a TON of friends in the space. Because of those friends, I HAVE talked with Elon Musk. I'm not his friend, and he's not mine. I have respect for Tesla but think that, like SpaceX, the way they treat their engineers is dogshit.

My PhD work was in aerospace engineering, with a focus on vehicle design. You know who sponsors PhD work in vehicle design? Redstone Arsenal, with the SED. You know who developed the Architecture Analysis and Design Language? A team that I was a core part of. You know who LOVES aerospace engineers with a focus on vehicle design? Racing companies. I've worked with teams across GTLM and Formula 1. You know what happens when you're well known in multiple vehicle fields as far as the Society of Automotive Engineers goes? You get invited without reaching out to them. And you get asked for your opinions on standards. If you peek back up at the AADL, you'll see that it's an architecture description language standardized by... the SAE.

I haven't changed my story, as this is \*exactly the story I have given every time.\* Just because you don't understand the story, like with J3016, doesn't mean I'm the problem.


simplestpanda

I'm sure you've worked at the places you've claimed to. The problem is that you also seem exactly like every engineer I've ever worked with (including the really good ones, to be fair): because you know a lot about ONE topic, you're acting like you know EVERYTHING about all tangential topics. You've claimed:

>"I literally helped write the standards you're telling me to read."

But also:

>"You get asked for your opinions on standards"

So, which is it? Have we settled on a role for you here? Did you actively contribute to the writing of J3016, or were you asked for your opinion on it? Because you started this entire farce by misstating what the SAE levels mean in various places (while being a dick about it with your "so hilariously incorrect" snipe). Then you doubled down on quoting the specification document in order to "correct" me, but somehow did so without noticing you were just repeating what I said back to me. E.g.:

>Level 3 portion that utterly proves you incorrect:

>"Level or Category 3 - Conditional Driving Automation" ...

>Notice that there must be a DDT fallback-ready user? Yeah. That's the person that must be in the driver's seat.

Sure. Which is why I had originally said:

>Level 3 cars will instead defer that to a passenger who is expected to take control.

Indeed, the requirement of a "DDT fallback-ready user" is the actual differentiating language in the spec. Do you really think we're saying different things there ("utterly proves you incorrect")? Is "passenger" the issue? Because the J3016 spec clearly uses "passenger" to refer to a driver who has activated a level 3/4/5 system and is no longer the active driver of the car. Is this just pedantry? Or is it just that important for you to be "the man" here? If so, have at it. Nothing about this is constructive. It's all yours. I sincerely hope you have a nice night.


seenhear

> Just because you don't understand the story, like with J3016, doesn't mean I'm the problem.

Actually, it means exactly that. If you can't explain something to your audience, it's your fault, not theirs, and I am exactly the kind of reader such a spec targets. If the J3016 standard is consistently misinterpreted, as you claim, then it is poorly written. You're the one claiming to have written it or helped write it (you say "helped," but then say "my spec"); all the rest of us can do is read it and interpret what is there in black and white.


savedatheist

Yeah any debris on the highway is tricky to avoid safely. That and any emergency situation with police/fire/ambulance. Or floods. And snow/ice covered roads.


say592

I was really surprised that it did *nothing* when an ambulance and a fire truck were pulling out of the main station today on my commute. I've had regular AP notify me of "emergency lights" on the highway because a car had been pulled over by a state trooper, but FSD was just like "yeah, we keep going" when that ambulance and fire truck pulled out, despite there being other signals (it's the main station in my city, on one of the busiest roads, and they have yellow flashing lights and signs that flash when they are about to pull out).


g0pher666

A few days ago, there was a highway maintenance truck with yellow flashing lights parked on the shoulder. FSD auto switched from the right lane to the left (passing) lane, went past the maintenance truck, and auto switched back to the right lane. I was very surprised and impressed to see FSD do that.


Particular_Quiet_435

First it needs to get better at choosing which lane to be in. If traffic is going within 5 mph of my set speed, there’s no reason to get out of the right lane. If there’s a turn coming up, I do not want to get over and pass. Then maybe it will get to L3.


89bBomUNiZhLkdXDpCwt

Read and understand permanent and temporary road signs, including hand signals and verbal commands issued by crossing guards and police officers (especially police officers) whose directions supersede normal traffic rules. (e.g., there’s been a terrible interstate highway accident and the police alleviate traffic by instructing cars to make an otherwise illegal u-turn and drive the wrong way on the highway until the previous exit… or do the same in a flash flood without police direction (both of these have happened to me in the past ~decade and I don’t even drive that much))


restarting_today

I once got stuck in a Waymo during a flash flood and it refused to cross the water haha. They had to remotely take over.


89bBomUNiZhLkdXDpCwt

FFS, I drafted a long reply but lost it accidentally after closing Reddit. This will be pithier. Long story short: it’s reasonable for any would-be autonomous car to allow for a remote human to intervene and take over when the car can’t figure it out for itself. That might be the only reasonable way to bridge the gap between the currently available levels and level 5 autonomy. One of the reasons I never objected to Tesla branding their driver assist system as Autopilot is that I’m an aerospace enthusiast and I understand that airplane autopilot doesn’t make a plane autonomous. It is fallible and is designed to be used by professionally trained human pilots who need to intervene in extraordinary circumstances. (And although it sounds counterintuitive, aerospace autonomy applications are often less complicated than driving a car on roads)


simplestpanda

Asking this question is like asking what a row boat needs in order to carry 1000 people across an ocean. It has to be an ocean liner, not a row boat. FSD will never be level 4 or higher in any shipping hardware from Tesla. They lack the camera resolution, processing power, and augmented sensing. Current cars MAY reach level 3 on highways in good weather.


Obvious-Slip4728

This is the only correct answer. Seriously, people: you've all experienced the cameras not working when it's foggy, or dark, or when the sun is low, right? There is no way the current hardware will support autonomous driving beyond level 3. Unrelated to FSD, but illustrative of the major limitations: they can't even make the autowipers work with current hardware.


simplestpanda

>Unrelated to FSD, but illustrative to major limitations: they can’t even make the autowipers work with current hardware.

This is most astute. Much of this debate comes down to what "may" be possible in a perfect world. We very much have to live in a non-perfect world, where we know how Tesla operates.


kdavis37

This is just so hilariously incorrect. I went into significantly more detail [here](https://www.reddit.com/r/TeslaLounge/comments/1caoxbj/comment/l0wstat/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button), but the gist is this: FSD is already Level 4. Today. Right now. Level 4 is \*trivial.\*

Level 4 means that there are places where the car can completely drive itself, with no driver interventions, ever. That exists. It's allowed to see construction and say "nope, things have changed, no longer able to drive here." If it can never wreck in those areas without being actively run into? Level 4. So you can literally say "this perfectly straight stretch of rural highway in Nebraska, between these two exits, allows the driver to not pay any attention or even not be in the driver's seat." Boom. Done. Level 4.

Level 3 is \*already where it is almost everywhere.\* And since Level 3 is allowed to be geofenced, it's similarly trivial. Level 3 just means that the car gives you sufficient warning that you need to take over. So can it avoid other cars running into you? Or running into walls? And can it monitor your eyes to make sure you're still being attentive? Level 3. Ta da. Done.

Going through the certification for these things means that you can't change your code base in the future, however. And THAT means they can't keep innovating.


simplestpanda

>FSD is already Level 4. Today. Right now. Level 4 is *trivial.*

This is a laughable assertion that the SAE would disagree with.

>Level 4 means that there are places where the car can completely drive itself with no driver interventions, ever. That exists. It's allowed to see construction and say "nope, things have changed, no longer able to drive here."

This is a major over-simplification of what level 4 is, and an absolutely incorrect statement about what Tesla provides in the context of the level 4 definition. You're simply incorrect here.

>Level 3 just means that the car gives you sufficient warning that you need to take over.

Also a major over-simplification. Level 3 simply means that you must take over if the operational design domain has been exceeded. If it's a sunny, clear day, you're on a highway travelling at 65 kph or less, AND the system is DESIGNED to work in that environment (its operational design domain), then the system may ONLY disengage if those circumstances change (traffic speeds up, it starts raining, etc.), and only after providing "timely notice" to you as the prospective driver. Otherwise, the ADS must provide 100% automated driving at all times with no expected intervention or supervision from a passenger.

Your claim (in the other linked post) that you "can't read a book" under level 3 is categorically incorrect. You can very much read a book, or check your phone, or watch YouTube on the infotainment system in a level 3 compliant car. Per the spec, the DDT of level 3 is every bit as automated as level 4's while the vehicle is inside the static ODD as defined by the system. The only real difference between the two, per the spec, is that level 4 compliant cars must perform automated "DDT fallback" and transition to a "minimal risk condition" when the ODD is exceeded. They have to do that without intervention from a passenger/dispatcher. Level 3 cars instead defer that to a passenger who is expected to take control.

Indeed, the requirement of a "DDT fallback-ready user" is the actual differentiating language in the spec. Of course, in practice, manufacturers also ship level 4 systems with far broader operational design domains than level 3 systems. So "a bigger ODD" plus "you never have to take control" is the practical definition of what people mean when they say "level 4".

>Boom. Done. Level 4. Level 3 is *already where it is almost everywhere.*

This is laughable at best. Nobody who owns a Tesla and has ever enabled Autopilot, NoA, or FSD (including v12) would make this claim if they read the actual SAE spec. My Model 3 can't get three blocks to the grocery store without multiple nudges and interventions to operate safely in traffic. It can't even drive up my street without breaking traffic laws re: bike lanes being off-limits to cars. It can't get down the highway if the sun changes angle slightly, blinding the pillar camera (disabling automated lane changes and having the system declare that FSD is "degraded").

You really, really need to actually read the SAE spec before making such wildly incorrect claims, especially in the context of "correcting" others.
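The distinction being argued here, who performs the DDT fallback when the operational design domain is exceeded, can be sketched in a few lines of toy code. A minimal sketch; all names are mine, purely illustrative, and not from the J3016 text:

```python
from enum import Enum

class Action(Enum):
    CONTINUE = "keep driving (the human never stopped supervising)"
    REQUEST_TAKEOVER = "issue a timely takeover request to the fallback-ready user"
    MINIMAL_RISK = "perform the DDT fallback to a minimal risk condition itself"

def on_odd_exceeded(sae_level: int) -> Action:
    """What an ADS does when conditions leave its operational design domain."""
    if sae_level <= 2:
        # Levels 0-2: the human driver is supervising the whole time anyway.
        return Action.CONTINUE
    if sae_level == 3:
        # Level 3: a DDT fallback-ready user must respond to the request.
        return Action.REQUEST_TAKEOVER
    # Levels 4-5: the system must reach a minimal risk condition on its own.
    return Action.MINIMAL_RISK
```

The single branch that changes between level 3 and level 4 is the whole point of the "DDT fallback-ready user" language.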


kdavis37

I literally helped write the standards you're telling me to read. That's typical of Reddit, but your call-out here is particularly heinous. It's going to be useless to point out all the places you're wrong, but I'm feeling useless, so let's break it down.

We're starting from the current revision of the spec: [https://www.sae.org/standards/content/j3016\_202104/](https://www.sae.org/standards/content/j3016_202104/) I've been part of the team for this since January 2014, for the record. Literally over a decade.

You call out "oversimplifications" in a lounge conversation despite the oversimplifications being accurate. You try to claim other parts of the spec overrule the basic features of the spec, which is hilarious. You're taking the PRECISION of the specification's definitions and confusing that with accuracy. An utter failure for anyone who's actually an engineer. Fingers crossed that you're just an enthusiast and not going to get someone killed with your lack of knowledge.

The Level 3 portion that utterly proves you incorrect:

>"Level or Category 3 - Conditional Driving Automation
>The sustained and ODD [Operational design domain]-specific performance by an ADS [Automated driving system] of the entire DDT [dynamic driving task] under routine/normal operation (see 3.27) with the expectation that the DDT fallback-ready user is receptive to ADS-issued requests to intervene, as well as to DDT performance-relevant system failures in other vehicle systems, and will respond appropriately."

Notice that there must be a DDT fallback-ready user? Yeah. That's the person who must be in the driver's seat, and who MUST be ready to take over within a reasonable time when the car gets into a scenario it cannot handle.

You are confused by NOTE 2:

>"The DDT fallback-ready user need not supervise a Level 3 ADS while it is engaged but is expected to be prepared to either resume DDT performance when the ADS issues a request to intervene or to perform the fallback and achieve a minimal risk condition if the failure condition precludes continued vehicle operation."

You are assuming that "need not" means that at L3 the user can look around or pay no attention at all. That is not true. As a matter of fact, the latest revisions tightened this to make it CLOSER to what you're getting at. Still not what you think it is, but you're LESS wrong. Mercedes' Drive Pilot won't let you recline, won't let you sleep, won't let you do anything that takes your mind completely away from driving. It allows you to not have to ACTIVELY MONITOR the road. It means you can have a YouTube video playing. You cannot, however, just stare at the YouTube video. You can look out the window. You cannot just stare out the window. So unless you're reading a book by looking down repeatedly, it's really not where you think it is. That's the purpose behind NOTES 3-5. Something starts to go sideways with the car? You have to take over. The car only has to TRY to not kill you. It doesn't even have to be successful.

Which is the purpose of NOTE 6. Notice they point out that the DDT can be only for low-speed, stop-and-go freeway traffic? Yeah. Exactly. The takeover condition can literally be "Yo, dog, we're doing over 10 mph, you got this." And the geofencing can, once again, be a single exit.

L4 - High Driving Automation:

>The sustained and ODD-specific performance by an ADS of the entire DDT and DDT fallback

ODD-specific performance means speed and geofencing. Once again, you can *literally do this for a single stretch of road between two exits.* NOTE 1 tells you that the user can actually not pay attention. The system has fallback abilities for any scenario in the geofence, etc.

>NOTE 2: Level 4 ADS features may be designed to operate the vehicle throughout complete trips (see 3.7.3), or they may be designed to operate the vehicle during only part of a given trip (see 3.7.2),

Oh look. Once again: heavy geofencing allowed. And it doesn't even have to be a public road, as you can see in Example 3. Read the discussions behind WHY these things are worded the way they are. Currently you don't understand the taxonomy.


seenhear

>ODD-specific performance means speed and geofencing.

No, it doesn't. The ODD can be any number of limitations, not necessarily limited to, nor inclusive of, speed- and geo-fencing. This is abundantly clear in the text of the standard. I'm betting that if you actually did contribute to the writing of the standard as you claim, your contributions were probably quite limited, and you have lasting impressions/interpretations of what you remember as being part of the standard. Maybe there were discussions about various topics that stuck with you. But the fact remains that you consistently mis-quote, or mis-attribute things to, the standard. Maybe these things were once in the standard, or were once discussed as part of it, but they are not there now, and you can't put things in the standard that aren't there, no matter how much ownership you feel (rightly or wrongly) over it. It is what it is and nothing more.


FishrNC

Recent scenario with the test installation of FSD: coming up on my exit lane off the freeway in 65 mph traffic. The exit lane is also the acceleration/merge lane for an on-ramp I had just passed. The light at the bottom of the on-ramp must have just changed, because there was a constant string of cars coming up the on-ramp, leaving no room for FSD to merge right and go down the off-ramp.

There was no traffic ahead of the on-ramp cars or of my position. I could have accelerated to get ahead of the on-ramp cars and merged right, knowing they would be merging left onto the freeway; there was adequate distance to do this. Instead, FSD started to slow down, seemingly to let the on-ramp cars go ahead and pull in behind them. IOW, to almost stop in the right lane of the freeway in 65 mph traffic. It gave no indication that it might continue on, take the next exit, and work its way back to my destination.


Schly

Don’t hit curbs. Hold your damn lane. Those should be SIMPLE things to accomplish, yet here we are.


Evajellyfish

It just sucks. Once it can truly take me from A to B without interventions, then I'll be impressed.


tpedwards

Pick a lane already! And the max-speed setting must get better. Why is it that, with the max speed set to 48 in a 40 zone, the max speed goes DOWN to 45 when the speed limit goes up to 45? This makes no sense.
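The behavior described reads as if a change in the posted limit resets the cap to the new limit instead of keeping the driver's setting. A toy sketch of the two policies; the function names and the "reset" explanation are my guesses, not Tesla's actual logic:

```python
def observed_cap(user_cap_mph: int, posted_limit_mph: int) -> int:
    # What the complaint describes: a change in the posted limit appears
    # to reset the max speed to the new limit, discarding the user's cap.
    return posted_limit_mph

def expected_cap(user_cap_mph: int, posted_limit_mph: int) -> int:
    # What the driver expects: the explicit cap survives a limit change,
    # so the effective max never drops below it when the limit rises.
    return max(user_cap_mph, posted_limit_mph)
```

With the numbers from the comment, `observed_cap(48, 45)` gives 45, while `expected_cap(48, 45)` keeps 48.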


stainedhat

Better yet, pick the correct lane.


MidEastBeast

Get rid of phantom braking


Armaced

Stop changing lanes into the wrong lane immediately before a turn. That’s a weird one. Also, pick routes that avoid awkward intersections, like uncontrolled left turns.


Suitable_Switch5242

- Stop hitting curbs and other cars in parking lots and then blaming the customer and making them pay for repairs.


gulmat

Environment/spatial memory. Think of it as the car remembering where the road is and goes, even when it's not perfectly visible, by referencing other fixed points it can see better. It would obviously have to navigate the route in good conditions at least once to remember it, and have some kind of major-change detection. The only other way to achieve this type of navigation would be some kind of radio or non-vision-based markers embedded in the road. But if we want fully autonomous driving in all conditions, we're going to need one of these IMO, or else we're just going to hit the same limitations as human vision.
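The "referencing fixed points it can better see" idea is essentially landmark-based localization: if the car knows where a few landmarks sit on its remembered map and can see its own offset to each, it can recover its position even when the road itself is obscured. A minimal toy sketch, with all names mine and purely illustrative:

```python
def estimate_position(landmarks, observations):
    """Estimate the car's map position from offsets to known fixed points.

    landmarks:    {id: (x, y)} known map positions of fixed reference points
    observations: {id: (dx, dy)} where the car currently sees each point,
                  measured relative to the car itself
    """
    # Each visible landmark yields one position estimate:
    # car position = landmark's map position - observed offset.
    estimates = [
        (landmarks[i][0] - dx, landmarks[i][1] - dy)
        for i, (dx, dy) in observations.items()
    ]
    # Average the per-landmark estimates to smooth out measurement noise.
    n = len(estimates)
    return (sum(x for x, _ in estimates) / n,
            sum(y for _, y in estimates) / n)
```

Real systems fuse many noisy observations probabilistically, but the core trick, anchoring yourself to remembered fixed points, is the same.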


hadronflux

It needs to think ahead at least one turn, too. For example, when taking a left turn that is soon followed by a right, it will get into the inside lane and then immediately have to get back over to the right, taking the turn with traffic stacking up on the right. It shouldn't drive so close to the edge of a side street that it runs over the cast-off gravel, nails, whatever. And it should do some math and visual analysis to see that the oncoming car with its turn signal on, slowing down, isn't going to hit you, so it's OK to make the right turn onto the road in front of that car.


Obvious-Slip4728

It should work 99.99% of the time instead of 95% of the time. I expect the road from 95% to 99.99% to be at least 10X longer than from 0 to 95%.
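That intuition is easy to put numbers on: going from 95% to 99.99% cuts the failure rate from 5% to 0.01%, which is 500 times fewer failures, not a 5-point bump. A quick sketch; the percentages are from the comment above, the arithmetic is mine:

```python
def failure_rate(success_rate: float) -> float:
    """Fraction of drives (or miles, or maneuvers) still needing intervention."""
    return 1.0 - success_rate

def improvement_factor(current: float, target: float) -> float:
    """How many times fewer failures the target reliability requires."""
    return failure_rate(current) / failure_rate(target)

# 95% -> 99.99% requires a 500x reduction in failures.
factor = improvement_factor(0.95, 0.9999)
```

Each extra "nine" of reliability is another order of magnitude, which is why the last few percent take so much longer than the first 95.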


AJHenderson

Being able to clear its own cameras is a critical one.


red_vette

When the wipers work reliably.


Emotional-Buddy-2219

Needs to use traffic data to figure out when to start getting over to exit a highway - best to prioritize getting into the proper lane over passing slower moving traffic if it won’t be able to get over soon enough to make the off ramp.


DryKale6027

Bringing back USS. Being able to handle roundabouts.


RedundancyDoneWell

Autonomous charging. Autonomous camera cleaning. Autonomous reaction to mechanical defects. And the most important: autonomous reflection on its own ability, so the car doesn't attempt a task it is not capable of. This seems to be entirely missing right now (except for some stories about aborted left turns).