iPod creator, Nest Labs founder, and investor Tony Fadell took a shot at OpenAI CEO Sam Altman on Tuesday during a spirited interview at TechCrunch Disrupt 2024 in San Francisco. Speaking about his long history with AI development, which predates the current LLM craze, and the serious problems around LLM hallucinations, he said: “I’ve been doing AI for 15 years, guys, I’m not just talking out of my a** — I’m not Sam Altman, okay?”
The comment elicited a surprised murmur of “oohs” from the crowd, along with scattered applause.
Fadell was on a roll during his interview, touching on a number of topics, from what kind of “a**holes” can produce great products to what’s wrong with today’s LLMs.
While acknowledging that LLMs are “great at some things,” he explained that there are still serious concerns to address.
“LLMs are trying to be this ‘general’ thing because we’re trying to make science fiction happen,” he said. “(An LLM is) a know-it-all … I hate know-it-alls.”
Instead, Fadell suggested he would prefer AI agents that are trained for specific tasks and are more transparent about their mistakes and hallucinations. That way, people could learn everything about an AI before “hiring” it for the specific job at hand.
“I hire them to … educate me, or I hire them to be my co-pilot, or I hire them to replace me,” he explained. “I want to know what this thing is,” he said, adding that governments should get involved to enforce such transparency.
Otherwise, he noted, companies using AI would be putting their reputations on the line for “some bullshit technology.”
“Right now we’re all adopting this thing and we don’t know what problems it causes,” Fadell pointed out. He cited a recent report stating that doctors using ChatGPT to generate patient reports were encountering hallucinations in 90% of them. “Those could kill people,” he continued. “We’re using this stuff and we don’t even know how it works.”
(Fadell appears to be referring to the recent report in which University of Michigan researchers studying AI transcriptions found a significant number of hallucinations, which could be dangerous in medical contexts.)
The comment about Altman came when he shared with the audience that he has been working with artificial intelligence technologies for years. For example, Nest used AI in its thermostat in 2011.
“We couldn’t talk about AI, we couldn’t talk about machine learning, because people would get too scared — ‘I don’t want AI in my house’ — now everyone wants AI everywhere,” Fadell noted.