I'd be curious about their analysis of the Nvidia self-driving car project, which (as I understand it) uses world models to train cars on far more extensive scenarios, albeit simulated, than is possible in the real world. That keynote came after this article, of course.
But I did check their dismissive claim about the 90% coding figure at Anthropic by watching the link they provided. The Anthropic person said that 90% was achieved by various teams within Anthropic, and he also hedged about the exact nature of it; it is a messy metric to be precise about. I thought the author was not generous in interpreting it, which makes me skeptical of the rest of the article.
> My suspicion is that it’s parts of the city where you don’t get good signal. Anyway, I don’t know anything about the stack. I’m just making stuff up.
He knows just about as much as the rest of us who have taken a Waymo, so he can't comment on how far along it is.
The great thing about his comment, for which I have the utmost respect, is that he admits to having made it all up; in other words, it's just a wild guess / hypothesis. Many people will not caveat their bullshit with this disclaimer.
I have been dismissed for saying this, using self-driving cars as the example. Getting from 95 percent there to 100 percent with AI is going to be nearly impossible. Not impossible, but the time and resources required to get one use case, such as self-driving cars, to the point of usability will cost trillions and take decades. For anything we might want to automate with AI, the question that needs to be asked is whether automating that task is worth billions, if not trillions, of dollars and decades of time.
AI makes a really big first impression, and it looks good at first glance, especially if you aren't good at, or knowledgeable about, what you are asking it to do. But as soon as you know anything about what you are asking for, you realize it is wrong or bad, sometimes incredibly so.
I don't mean this to be dismissive of the technology. It is already having an impact and will continue to do so, but expectations and investment need to be tempered.
https://waymo.com/blog/2024/05/fleet-response?utm_source=cha...
https://www.autoblog.com/news/teslas-robotaxis-keep-crashing...