Apple’s Covert Zurich Research Lab Filled with Former Google AI Stars


According to a report from the Financial Times, Apple has hired dozens of artificial intelligence experts from Google. The company has set up a new, secretive laboratory in Zurich, Switzerland, where this newly formed team will work on developing new AI models and products. The move signals Apple's strong commitment to advancing its AI capabilities, whether to enhance existing services or to build new ones on cutting-edge AI technology.

According to an analysis by the Financial Times, which examined LinkedIn profiles, Apple has hired at least 36 specialists from Google since 2018. This strategic recruitment began around the time Apple hired John Giannandrea, a former Google executive, to lead its artificial intelligence efforts. Giannandrea’s move to Apple marked a significant bolstering of the company’s AI capabilities, as he brought extensive experience from his previous role overseeing AI and search at Google. This hiring spree underscores Apple’s serious investment in enhancing its AI expertise and capabilities.

Apple Acquires AI Startups in Zurich

Apple’s primary artificial intelligence teams are based in California and Seattle. However, the company has recently increased its AI-focused presence in Zurich, Switzerland. This expansion was influenced in part by Apple’s acquisitions of local AI startups: FaceShift, which specializes in virtual reality, and Fashwell, known for its image recognition technology. These acquisitions have led Apple to establish a secretive research facility in Zurich, known as “Vision Lab.” This lab focuses on developing advanced AI applications, likely enhancing Apple’s capabilities in areas such as augmented reality, virtual reality, and computer vision technologies.

According to the report, the employees at Apple's secretive "Vision Lab" in Zurich are deeply involved in researching technologies similar to those underlying OpenAI's ChatGPT and other products built on large language models (LLMs). The lab's focus is on developing more sophisticated AI models that not only process text but also incorporate visual inputs to generate responses to queries. This multimodal approach aims to create AI systems that are more intuitive and capable of understanding a wider range of human interactions, blending textual and visual information seamlessly. The research could lead to significant advancements in how AI is used across Apple's products, enhancing user interfaces and interactions throughout its device ecosystem.
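To give a rough sense of the multimodal idea described above, here is a toy sketch in Python. It is not Apple's actual architecture; all names, dimensions, and numbers are invented. The core trick is projecting text features and image features into one shared space so a single model can combine them:

```python
# Toy illustration of multimodal fusion: project text and image
# features into a shared space, then combine them element-wise.
# All dimensions and weights here are made up for demonstration.

TEXT_DIM, IMAGE_DIM, SHARED_DIM = 3, 2, 2

# Fixed toy projection matrices; a real system would learn these.
W_TEXT = [[0.2, -0.1], [0.4, 0.3], [-0.5, 0.6]]   # TEXT_DIM x SHARED_DIM
W_IMAGE = [[0.7, 0.1], [-0.2, 0.5]]               # IMAGE_DIM x SHARED_DIM

def project(features, weights):
    """Multiply a feature vector by a projection matrix (row-per-feature)."""
    cols = len(weights[0])
    return [sum(f * row[j] for f, row in zip(features, weights)) for j in range(cols)]

def fuse(text_features, image_features):
    """Element-wise sum of the two projected modalities."""
    t = project(text_features, W_TEXT)
    v = project(image_features, W_IMAGE)
    return [a + b for a, b in zip(t, v)]

fused = fuse([1.0, 0.5, -0.5], [0.8, 0.2])
print(len(fused))  # 2 — one vector the model can reason over
```

Production systems use learned transformer encoders rather than fixed matrices, but the principle — mapping both modalities into a common representation before generating a response — is the same.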

The report indicates that Apple’s recent focus on developing large language models (LLMs) is a direct extension of its long-term efforts with Siri, its voice-activated assistant. Over the past decade, Apple has invested heavily in enhancing Siri’s capabilities, which involves complex AI technologies, including speech recognition and natural language processing. The move towards more advanced LLMs represents a progression in these areas, aiming to improve how machines understand and generate human-like text and responses.

Apple to Create a More Powerful User Experience Using AI

This advancement in LLMs could enable Siri and other Apple services to handle more sophisticated tasks, understand context better, and interact in a more conversational and helpful manner. By integrating enhanced AI models that incorporate both text and visual inputs, Apple is looking to create more powerful and intuitive user experiences across its product lineup.

The company has long been aware of the potential of “neural networks” — a form of AI inspired by the way neurons interact in the human brain and a technology that underpins breakthrough products such as ChatGPT.

Chuck Wooters, an expert in conversational AI and LLMs who joined Apple in December 2013 and worked on Siri for almost two years, said: “During the time that I was there, one of the pushes that was happening in the Siri group was to move to a neural architecture for speech recognition. Even back then, before large language models took off, they were huge advocates of neural networks.”
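The "neural network" idea the article refers to can be sketched in a few lines of Python. This is a deliberately tiny, hand-weighted example (real networks learn millions or billions of weights from data), but it shows the basic unit: a weighted sum of inputs passed through a nonlinearity, loosely inspired by how biological neurons fire.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # squashes output to (0, 1)

def tiny_network(inputs):
    """A two-layer network: stacking layers of neurons is what lets
    models like the ones behind ChatGPT represent complex patterns."""
    hidden = [
        neuron(inputs, [0.5, -0.6], 0.1),
        neuron(inputs, [-0.3, 0.8], 0.0),
    ]
    return neuron(hidden, [1.2, -0.7], -0.2)

score = tiny_network([0.9, 0.4])
print(round(score, 3))
```

The weights here are arbitrary; in practice they are tuned by training on examples, which is what made neural architectures so effective for tasks like the speech recognition work Wooters describes.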

High Profile Google Employees Join Apple’s AI Team

Apple’s top AI team includes several high-profile former Google employees, highlighting the company’s commitment to strengthening its expertise in artificial intelligence. John Giannandrea, who was once the head of Google Brain — now a part of DeepMind — is a key figure at Apple, where he leads AI efforts. Another notable figure is Samy Bengio, now senior director of AI and ML (Machine Learning) research at Apple, who was previously a prominent AI scientist at Google. Additionally, Ruoming Pang, who leads Apple’s “Foundation Models” team focused on developing large language models, also came from Google, where he was in charge of AI speech recognition research.

These recruitments from a competitor well-known for its AI advancements reflect Apple’s strategic efforts to build a robust team that can drive innovation in AI and machine learning technologies. This team’s work is crucial for the development of more sophisticated AI capabilities across Apple’s products and services, enhancing everything from voice assistants to more complex AI-driven applications.

Apple Acquired Another AI Startup in 2016

In 2016, Apple made a strategic acquisition of Perceptual Machines, a company specializing in AI-powered image detection technologies. The company was founded by Ruslan Salakhutdinov, a prominent figure in the field of neural networks and a scholar at Carnegie Mellon University. Salakhutdinov studied under Geoffrey Hinton at the University of Toronto. Hinton, often referred to as the "godfather" of neural networks, has had a profound impact on the development of AI technologies; notably, his concerns about the potential dangers of generative AI led to his departure from Google in 2023.

This acquisition underscores Apple’s interest in enhancing its capabilities in AI, particularly in the realm of image recognition and generative AI, which are crucial for the development of new products and the improvement of existing ones, such as photo management apps, augmented reality applications, and more personalized user experiences across Apple’s device ecosystem.

Why Apple Has Delayed in the AI Field

Ruslan Salakhutdinov told the Financial Times that one reason for Apple's cautious approach to deploying AI technologies, particularly language models, is their propensity to generate incorrect or problematic responses. He stated, "I think they are just being a little bit more cautious because they can't release something they can't fully control." This reflects Apple's commitment to quality and reliability, ensuring that any AI features introduced in its products meet the company's high standards for accuracy and user safety. Apple's conservative strategy in AI deployment highlights its priority of avoiding the risks associated with AI, such as the dissemination of misinformation or offensive content, which could undermine user trust or cause reputational damage.

iOS 18 will bring a significant upgrade to various Apple apps by integrating new generative AI features. These enhancements are coming across a broad spectrum of applications including Siri, Spotlight, Shortcuts, Apple Music, Messages, Health, and Apple’s productivity suite consisting of Keynote, Numbers, and Pages. The implementation of these features is expected to be powered by Apple’s own on-device large language models (LLMs), which aligns with the company’s focus on privacy and security by processing data locally rather than on cloud servers.

Additionally, reports have claimed that the company has been exploring partnerships with major AI players such as Google, OpenAI, and Baidu. These collaborations could expand Apple's AI capabilities or enrich the AI features offered in iOS by integrating external expertise and technologies. Such partnerships might also help Apple accelerate the development and refinement of AI functionality within its ecosystem, although the extent and nature of these collaborations remain unclear. This suggests Apple intends to advance its AI technology while balancing in-house development against strategic external alliances.

Apple AI: When will the Company Make the Announcement?

We expect Apple to unveil the new AI features for iOS 18 at the upcoming Worldwide Developers Conference (WWDC), which begins on June 10. This event is a significant platform for Apple to introduce major software updates and innovations. Attendees and viewers can anticipate their first look at how Apple intends to integrate AI across its ecosystem, potentially including enhancements to Siri, Spotlight, and other core applications. WWDC is often a showcase for Apple's latest technological advancements, and with the focus on AI this year, it promises to offer insights into the future capabilities of iOS devices.

Disclaimer: We may be compensated by some of the companies whose products we talk about, but our articles and reviews are always our honest opinions. For more details, you can check out our editorial guidelines and learn about how we use affiliate links.
