This Week in AI: A preview of the stacked AI agenda at Disrupt 2024


Hey guys, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

This week, the TechCrunch crew (including yours truly) is attending TC’s annual Disrupt conference in San Francisco. We have a great line-up of speakers from the AI industry, academia, and policy, so instead of my usual editorial, I thought I’d preview some of the great content heading your way.

My colleague Devin Coldewey will be interviewing Perplexity CEO Aravind Srinivas on stage. The AI-powered search engine is on a meteoric rise, recently reaching 100 million queries submitted weekly — but it’s also being sued by News Corp’s Dow Jones over what the publisher describes as “content kleptocracy.”

Kirsten Korosec, TC’s transportation editor, will sit down for a fireside chat with Zoox co-founder and CTO Jesse Levinson. Levinson, who has been in the thick of self-driving car technology for a decade, is now preparing the Amazon-owned robotaxi company for its next big adventure — and we’ll hear all about it.

We’ll also have a panel discussion on how AI is flooding the web with misinformation — featuring Meta Oversight Board member Pamela San Martín, Center for Countering Digital Hate CEO Imran Ahmed, and UC Berkeley CITRIS Policy Lab founder Brandie Nonnecke. The trio will discuss how, as generative AI tools become more widely available, they are being misused by a range of actors — including government actors — to create deepfakes and sow disinformation.

We’ll hear from Jingna Zhang, CEO of Cara, AI Now Institute co-executive director Sarah Myers West, and ElevenLabs’ Aleksandra Pedraszewska about the legal and ethical minefields of AI. The rapid rise of artificial intelligence has created new ethical dilemmas and exacerbated old ones, while lawsuits fly left and right. This threatens both new and established AI companies, as well as the creators and workers whose output fuels the models. The panel will address all of this — and more.

This is just a sample of what’s on deck this week. Expect appearances from AI experts such as US AI Safety Institute director Elizabeth Kelly, California Senator Scott Wiener, Berkeley AI Policy Hub co-director Jessica Newman, Luma AI CEO Amit Jain, Suno CEO Mikey Shulman, and Splice CEO Kakul Srivastava.

News

Apple Intelligence launches: With this free software update, iPhone, iPad, and Mac users can access the first set of Apple Intelligence capabilities powered by AI.

Bret Taylor’s startup raises new funds: Sierra, the AI startup co-founded by OpenAI board chairman Bret Taylor, has raised $175 million in a funding round that values the startup at $4.5 billion.

Google expands AI Overviews: The AI Overviews feature in Google Search, which displays a snapshot of information at the top of the results page, has begun rolling out in more than 100 countries and regions.

Generative AI and e-waste: Researchers predict that the massive, rapidly advancing computing demands of AI models could lead to the industry generating electronic waste equivalent to more than 10 billion iPhones annually by 2030.

Open source, now defined: The Open Source Initiative, a long-standing organization that aims to define and “govern” all things open source, this week released version 1.0 of its definition of open source AI.

Meta releases its own podcast generator: Meta has released an “open” implementation of the viral podcast-generation feature in Google’s NotebookLM.

Hallucinated transcriptions: OpenAI’s Whisper transcription tool suffers from hallucination problems, researchers say. Whisper is said to have inserted everything from racist comments to imagined medical treatments into transcripts.

Research paper of the week

Google says it has trained a model to convert images of handwriting into digital ink.

The model, InkSight, is trained to recognize words written on a page and output strokes that look almost like handwriting. The Google researchers behind the project say the goal is to “capture details of the handwriting path at the line level” so that the user can store the resulting lines in the note-taking app of their choice.

Image Credits: Google

InkSight isn’t perfect. Google notes that it makes mistakes. But the company also claims that the model performs well across a range of scenarios, including difficult lighting conditions.

Let’s hope it’s not used to forge any signatures.

Model of the week

Cohere for AI, the non-profit research lab run by AI startup Cohere, has released a new family of text generation models called Aya Expanse. The models can write and understand text in 23 different languages, and Cohere claims they outperform models including Meta’s Llama 3.1 70B on certain benchmarks.

Cohere says a technique it calls “data arbitrage” was key to Aya Expanse’s training. Inspired by how humans learn by going to different teachers for different skills, Cohere selected multilingual “teacher” models with specialized capabilities to generate synthetic training data for Aya Expanse.

Synthetic data has its problems. Some studies suggest that overreliance on it can produce models whose quality and diversity progressively deteriorate. But data arbitrage effectively mitigates this problem, Cohere says. We’ll soon see whether the claim stands up to scrutiny.

Grab bag

OpenAI’s Advanced Voice Mode, the company’s hyperrealistic voice feature for ChatGPT, is now available for free in the ChatGPT mobile app for users in the EU, Switzerland, Iceland, Norway, and Liechtenstein. Previously, users in those regions had to subscribe to ChatGPT Plus to use Advanced Voice Mode.

A recent New York Times article highlighted the pros and cons of Advanced Voice Mode, such as its reliance on tropes and stereotypes when trying to communicate in the ways users request. Advanced Voice Mode has risen to prominence on TikTok for its uncanny ability to imitate voices and accents. But some experts warn that it could foster emotional dependence on a system that has no real intelligence or empathy.

