Day: October 16, 2025

Spotify DJ Unveils Text-Based Requests and Expands Language Options to Spanish

**Spotify Enhances AI DJ Features with Text Requests and Spanish Language Support**

In May, Spotify launched an interactive version of its AI-powered DJ, letting users request music in real time. Initially limited to voice commands, the feature now also accepts typed requests, along with other enhancements.

### New Features of the AI DJ

Spotify’s AI DJ interactivity was introduced as a beta feature, and the company has been collecting user feedback to improve the experience. The latest updates include:

– **Text Requests**: This was the top request from users on social platforms. In addition to voice commands, users can now type their music requests, which is more convenient in busy, noisy, or quiet settings.

– **Spanish Language Support**: DJ Livi, first rolled out in 2024 for Spanish-speaking audiences, can now handle both typed and spoken requests, bringing it to parity with the English-language DJ.

Furthermore, the AI DJ will suggest three personalized prompts, helping users find songs and playlists tailored to their listening preferences while demonstrating what the DJ can do.

For more information regarding these updates, check out Spotify’s official announcement.

Read More
Meta Hires Apple’s Recently Appointed AI Search Lead

Meta has continued its pattern of luring top talent from Apple’s AI division, most recently hiring Ke Yang, who had been appointed to lead Apple’s new AI search initiative aimed at improving Siri. The move is part of a broader wave of high-profile departures from Apple as the company contends with rising competition in AI.

In recent months, several prominent members of Apple’s machine learning division have left for rival firms such as Anthropic and OpenAI, with Meta benefiting notably from the outflow. Key departures include Jian Zhang, Apple’s former lead AI researcher for robotics; Ruoming Pang, who ran the foundation models team; and Frank Chu, who oversaw cloud infrastructure and search for Apple’s AI efforts.

Yang’s appointment to lead the Answers, Knowledge, and Information (AKI) team had been seen as a strategic move for Apple: the team was charged with building features to make Siri more competitive with AI models like ChatGPT, including a new search experience that draws on information from the web.

Yang’s move to Meta, however, raises questions about the AKI team’s future leadership. He took charge of the group after the departure of Robby Walker, who had been overseeing the Siri overhaul. With Yang gone, it is unclear who will succeed him, particularly as John Giannandrea, Apple’s senior vice president of AI, has seen his influence diminish amid setbacks in shipping new AI features.

As Apple navigates this talent exodus and strives to strengthen its AI efforts, competition in the technology sector continues to escalate, with Meta positioning itself as a significant contender in the AI arena.

Read More
Sonos Updates Trueplay to Support the iPhone 17 Series

**Sonos Enhances Trueplay Feature with iPhone 17 Compatibility**

Sonos has recently updated its Trueplay feature to support the iPhone 17, allowing users to tune the sound of select speakers. The update matters for audiophiles and casual listeners alike, as Trueplay adjusts audio performance to a room’s specific acoustics.

### What is Trueplay?

Trueplay is a feature of Sonos speakers such as the Sonos Arc Ultra, Era 100, and Era 300. It offers two tuning modes: quick tuning and advanced tuning. Quick tuning uses the speakers’ built-in microphones to adjust the sound based on how it reflects off different surfaces in the room, while advanced tuning uses the iPhone’s microphone for even more accurate adjustments.

Sonos characterizes Trueplay as a technology that assesses how sound interacts with walls, furnishings, and other surfaces, ensuring that the speaker delivers optimal sound quality irrespective of its position in the room.

### Swift Adoption of iPhone 17 Compatibility

Historically, Sonos has been slow to add Trueplay support for new iPhone models after their debut. This time, however, the company moved quickly, with Trueplay compatibility for the iPhone 17 arriving less than a month after launch. The fast turnaround has been well received, with one Reddit user expressing gratitude for the prompt update and recalling past delays that stretched into the holiday season.

### Availability and Deployment

Although the feature is reportedly available, it has not been prominently mentioned in recent release notes, suggesting the rollout may still be underway. Users who do not see it yet should expect it to arrive soon.

### Conclusion

Trueplay support for the iPhone 17 is a welcome improvement to the Sonos experience, extending the company’s audio calibration tools to Apple’s latest phones. As Sonos continues to respond to user demand, supporting new devices promptly remains a clear priority.

Read More
Apple’s M5 vs. M4 Silicon: Key Advancements and Improvements

Today, Apple unveiled the newest addition to its M-series lineup of Apple silicon, the M5 chip. Here’s how it stacks up against the M4.

## Neural and AI Performance

The standout feature of M5 is a significant boost in performance for AI and ML workloads. Like the A19 and A19 Pro in the latest iPhone models, M5 incorporates new Neural Accelerators in its GPU cores that greatly speed up machine learning and AI tasks. Apple provides several examples:

– Time to first token (LLM): 3.6x quicker than M4
– Time to enhance video in Topaz Video: 1.8x quicker than M3
– Time to render in Blender with ray tracing: 1.7x quicker than M3
– Time to enhance voice with AI in Premiere Pro: 2.9x quicker than M3

In addition to the new Neural Accelerators, Apple says M5 features an “enhanced 16-core Neural Engine,” which should further help with ML and AI workloads.

## Memory Bandwidth

Just like the A19 and A19 Pro in the newest iPhone models, the M5 chip gets a boost in memory bandwidth: 153GB/s, a 27.5% increase over the M4’s 120GB/s. While more bandwidth can speed up any demanding workflow, it particularly benefits generative AI tasks.
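As a quick sanity check on that figure, here is a minimal sketch (plain Python, using only the two bandwidth numbers quoted above) showing that the 27.5% increase follows directly from the values themselves:

```python
# Sanity check: percentage increase in memory bandwidth from M4 to M5,
# using the figures quoted above (120GB/s for M4, 153GB/s for M5).
m4_bandwidth_gb_s = 120
m5_bandwidth_gb_s = 153

increase = (m5_bandwidth_gb_s - m4_bandwidth_gb_s) / m4_bandwidth_gb_s
print(f"{increase:.1%}")  # prints "27.5%"
```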

## Graphics

M5 also brings improvements to graphics performance. Thanks to the new Neural Accelerators, Apple says M5 delivers over 4x the peak GPU compute performance for AI compared to M4. The chip also includes a third-generation ray-tracing engine and “enhanced graphics capabilities,” though Apple does not specify exactly what those enhancements are. It does say that, thanks to both, M5 can achieve “up to 45 percent higher” graphics performance than M4.

## Processor

Like prior baseline Apple silicon chips, M5 has a 10-core CPU, made up of six efficiency cores and four performance cores. As with previous generations, Apple calls the new performance core the “world’s fastest performance core.” Apple also says the new CPU delivers up to 15% faster multithreaded performance than M4, in line with leaked benchmark data.

## Other Tidbits

The new 14-inch MacBook Pro with M5 now offers a 4TB storage option; previously, the largest available was 2TB. Finally, M5 moves from TSMC’s 2nd-generation 3nm process to its 3rd-generation 3nm process.

The M5 chip is currently available in the 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro.

Read More