Apple is the first tech giant to get AI right

Unlike Google and Microsoft, Apple is using AI to do what the technology has proved it can do rather than shipping unreliable features.

Apple's new iPhone features mark the first time a Big Tech company has made AI useful for normal people.

On Monday, as part of its Worldwide Developers Conference, Apple unveiled software features for its various products, including the iPhone and the iPad. The most anticipated part of the show was getting details on how the company would integrate artificial intelligence into its phones and operating systems.

During the presentation, Apple executives showed off how the tech giant's AI system — which they pointedly referred to as Apple Intelligence instead of artificial intelligence — could help with searching texts and photos, creating images, fixing grammar and spelling, summarizing text, and editing photos.

After the announcement, tech pundits, extremely online billionaires, and the cheap seats the world over complained that the features were small potatoes. CNET's Katie Collins wrote that Apple's most interesting new features were long overdue, summing up her reaction as "finally." Bloomberg's Mark Gurman called them "minor upgrades." My colleague Jordan Hart said they weren't the silver bullet Apple needed to reinvigorate the company. And Elon Musk registered his disappointment by sharing a stupid meme. In sum, many people are underwhelmed by Apple's practical integration of AI. Sure, maybe summarizing long emails and making transcripts of calls sounds boring compared with conjectures that AI could be used to detect cancer earlier, but guess what? Apple's scale and specificity of vision make it the first Big Tech company to get AI integration right.

Apple is using AI to do what the technology has proved it can do: be an assistant. Yes, the virality of OpenAI's ChatGPT put AI's potential on display. But using AI to power a robot that does your chores or to answer open-ended questions is still extremely imperfect. Chatbots lie, they hallucinate, they tell my colleagues to eat glue. Google's rollout, and subsequent rollback, of AI-generated answers to people's search queries is just one sign that the current iteration of the tech isn't ready for all the use cases Silicon Valley is dreaming about — to say nothing of the venture capitalist Marc Andreessen's claims that AI will be able to "save the world," "improve warfare," and become our therapists, tutors, confidants, and collaborators, ushering in a "golden age" of art.

Apple's updates are an appeal for everyone to get a grip. They are a clarion call for other tech companies to be practical with what they promise consumers and to deliver AI products that make our lives incrementally easier instead of confusing us with overpromises. Apple's use of the best of AI is also the best way for normal people to develop an understanding of what it can do. This is a way to build trust. Sure, maybe one day AI will figure out how to destroy civilization or whatever, but right now it's best at finding that photo of your dog dressed as a pickle you took back in 2019. And for the vast majority of people, that's perfectly fine.

What does AI do?

The fact that people are disappointed in Apple says more about the hype around AI's capabilities than it does about Apple. Since 2019, Musk has been promising that Tesla will make a self-driving robotaxi, and for even longer he's been overselling his driver-assistance technology as "Autopilot." OpenAI's internal arguments turned palace intrigue turned media fodder are mostly centered on concern about the speed at which AI's supposedly fearsome power will reshape humanity, not the limitations of its current practical application. The biggest models, the most powerful Nvidia chips, the most talented teams poached from the hottest startups — that is the drumbeat of AI news from Silicon Valley and Wall Street. We've seen tech hype cycles before; they're mostly about raising money and selling stock. Only time will tell whether the investments Wall Street and Silicon Valley are making in AI infrastructure will actually produce commensurate returns. That's how this game goes.


But in all that noise, the reality of what AI is good (and bad) at right now has gotten lost — especially when it comes to the large language models that undergird most of the new AI tools consumers will use, like virtual assistants and chatbots. The tech is based on pattern recognition: Rather than make value judgments, LLMs simply scan a vast library of information they've hoovered up — books, webpages, speech transcripts — and guess which word most logically comes next in the chain. There is an inherent limitation in that design. Sometimes facts are improbable, but what makes them facts is that they are provable. It might not make sense that Albany, not New York City, is the capital of the state of New York, but it's a fact. It might make sense to use glue, an adhesive, to stick cheese on pizza, if you're a robot with no context for what "food" is. But that's definitely not how it's done. As they stand, large language models can't make this value judgment between pattern and fact. It's unclear whether they'll ever be able to. Yann LeCun, Meta's chief AI scientist and one of the "godfathers of AI," has said that LLMs have a "very limited understanding of logic" and that they "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan." He has also said they cannot learn anything beyond the data they're trained on — anything new or original — which makes them mentally inferior to a house cat.
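To see why that matters, here is a deliberately tiny, purely illustrative sketch of next-word prediction (not Apple's or OpenAI's actual code). It uses a toy bigram model that picks whichever word most often followed the previous one in its training text; the corpus and the predict_next function are invented for the example.

```python
from collections import Counter, defaultdict

# Toy training text. A real LLM trains on billions of documents and
# predicts subword tokens with a neural network, not raw word counts.
corpus = (
    "the capital of new york is albany . "
    "the capital of france is paris . "
    "new york city is the largest city in new york ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most common next word.

    'Most common' means pattern frequency, not truth: the model has
    no mechanism for checking whether its continuation is a fact.
    """
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("new"))  # 'york': the dominant pattern in the corpus
print(predict_next("is"))   # a guess, with no notion of which answer is true
```

The toy makes the gap concrete: the model's only yardstick is "what usually comes next," which is exactly the pattern-versus-fact distinction LeCun describes.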

In other words, they're not perfect.

Enter Apple, a company known for a culture of perfection. It was slow to embrace the hype surrounding AI, and, as I mentioned, for a while it refused to use the term "artificial intelligence," instead preferring the long-dethroned, snoozefest term "machine learning." Apple started developing its own generative AI after ChatGPT launched in 2022, but it revealed the new features only when it felt they were good and ready. This tech is what will power features like Genmoji, which lets you describe a custom emoji to fit whatever's going on and then creates it — say, one of you crying while eating an entire pizza. It will also power more-practical applications, like writing an email to your boss when you're sick or pulling up that link your mom sent you in a text message. Right now, these basic call-and-response applications are the things at which LLMs excel.


If you want to use the latest Apple products to get into the freakier and more fungible world of talking to a chatbot, Siri will call up ChatGPT for you and let you run wild. This is Apple making a clear delineation between where its reliability ends and where a world of technological inconsistency begins. For Apple, this distinction makes sense. It wants its products to be associated with cutting-edge technology but also efficacy and productivity.

The distinction, however, does not serve the rest of Silicon Valley or its venture-capital investors. Anyone fundraising or investing in this technology would prefer you see the capabilities and value of AI as a moving target — specifically one moving up, to the right, and fast. Apple's rigorous standards serve as a way to firmly establish AI's present capabilities — or limitations, depending on how you see the glass. The alternative is what we're seeing at other companies, where users are guinea pigs, stuck working with tech that makes them question what they see. Societies around the world are already grappling with a crisis of faith in institutions; faulty AI just spreads that mistrust wider and faster. It's another stone in the wall between people and their faith in what they read on the internet. In that way, Apple's cautious approach may be a service to the rest of the tech industry. By slowly acclimatizing its constellation of users to AI that makes their lives better instead of frustrating them, Apple makes the tech feel like a natural upgrade instead of an unreliable, scary intrusion.

Sure, Apple's AI may not be sexy or scary, but at least it doesn't seem stupid. Ideally, that means it won't make our world any stupider either.


Linette Lopez is a senior correspondent at Business Insider.

