
The OnePlus Pad 3 Takes on Samsung and Apple with Remarkable Features

**Android Central Verdict: OnePlus Pad 3 Review**

The OnePlus Pad series has come a long way, and the OnePlus Pad 3 pairs a high-end design with superb software, making it a serious contender in the tablet market. There are some compromises, but its pricing makes it a desirable choice.

**Advantages:**
– Remarkable price-to-performance value
– Subtle AI functionalities
– Expansive display paired with an ergonomic keyboard
– Sleek and lightweight construction
– Works with OnePlus Stylo 2
– Enhanced Open Canvas capabilities

**Drawbacks:**
– O+ Connect restricted to macOS
– Absence of a fingerprint scanner
– No microSD card slot
– Utilizes IPS LCD rather than OLED

**Pricing and Availability:**
The OnePlus Pad 3 is available for preorder from June 5 to July 8 at $699 in a single color, Storm Blue. Preorders for the Smart Keyboard and Folio Case are also open, with special promotions for early customers.

**Design and Display:**
The tablet sports a 13.2-inch IPS LCD with a 144Hz refresh rate and a premium design on par with top-tier tablets. Even without OLED, the screen is impressive, and the LCD panel is a better fit for users sensitive to PWM flicker.

**Accessories:**
OnePlus offers a larger Smart Keyboard Cover while retaining compatibility with the OnePlus Stylo 2. The keyboard delivers a familiar typing experience, though some users may run into cursor-movement quirks.

**Performance:**
With the Snapdragon 8 Elite, 12GB of RAM, and 256GB of storage, the Pad 3 delivers robust performance. It doesn’t support expandable storage, but external SSDs work well when you need extra space.

**Software and AI:**
The Pad 3 focuses on productivity, with improvements to Open Canvas. AI features are present but restrained, including a dedicated AI key and Circle to Search.

**Issues:**
O+ Connect works smoothly with macOS but not with Windows, limiting its usefulness for some users.

**Competition:**
The OnePlus Pad 3 goes up against Samsung and Apple in the premium tablet market, offering a mix of features and pricing that makes it an enticing option.

**Should You Purchase It?**
Consider the OnePlus Pad 3 if you want a top-tier Android tablet with outstanding performance and design at a reasonable price. Skip it if you prefer OLED displays or need LTE/5G connectivity.

The Galaxy Z Fold 7 Will Likely Debut Without the Anticipated Magnets: Here’s Why It Shouldn’t Surprise You

What you can expect instead is “Qi2 Ready.”

(Image credit: Derrek Lee / Android Central)

For the past few years, every significant Android phone release has been met with speculation about whether this would be the device that works with those clever magnetic Qi2 chargers. In nearly every instance, the answer isn’t what users want to hear, and the Galaxy Z Fold 7 looks set to follow the trend: it is rumored to ship without the built-in magnets that Qi2 charging relies on.

I’ve said it before, and I’ll say it again: stop expecting it. It’s not that those magnetic Qi2 chargers aren’t useful; I definitely understand the appeal for anyone who cannot or will not put their phone down long enough to recharge, and substantial money has been spent telling you that you should want it. But it’s disheartening when a phone launches and you discover you must buy a case to use the magnetic accessories.


Google Begins ‘Search Live’ AI Mode Trial for Opted-In Mobile Users

Individuals using Android or iOS devices may want to stay vigilant.

(Image credit: Google)

What you need to know

  • Google is reportedly initiating a trial for AI Mode in its primary app known as “Search Live” for Android and iOS.
  • This trial, akin to Gemini Live, allows users to interact with AI Mode to retrieve results in real time.
  • During I/O 2025, Google introduced several updates planned for AI Mode, including Project Astra and Mariner integration.

As the week wraps up, Google has reportedly begun a fresh test in Search relating to its AI Mode.

The test, which appears to be rolling out to Search Labs users, is called “Search Live.” Per 9to5Google, which spotted the test, the experiment puts AI Mode front and center with a familiar Gemini Live feel. When activated in the main Google app via a waveform icon beneath the search bar, Search Live greets users with an informative splash page. The company says, “With Live, you can engage in a real-time voice dialogue with AI Mode to discover exactly what you seek.”

It feels a lot like Gemini Live (even the user interface looks similar).

Switching Playlists from Spotify and Various Streaming Platforms to YouTube Music

All your tunes consolidated in one spot.

(Image credit: Nicholas Sutrich / Android Central)

Using YouTube Music but don’t want to rebuild that entire magnificent playlist, or the several others you assembled on another platform? It could be your party playlist for when friends and family come over, a themed collection of songs for the holidays, upbeat summer hits, or your everyday workout or running soundtracks. You can quickly transfer the playlists you’ve built from services like Spotify and Apple Music to YouTube Music, then find them all in one place while enjoying everything YouTube Music has to offer.

Steps to transfer playlists to YouTube Music from Spotify

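YouTube Music has no built-in importer for Spotify playlists, so one do-it-yourself route is a small script gluing together the third-party spotipy and ytmusicapi Python libraries. This is a minimal sketch under those assumptions (both libraries installed and authenticated); dedicated transfer services exist as well, and matching tracks by search is best-effort.

```python
def track_query(name, artists):
    """Build a search string for one Spotify track, e.g. 'Song Artist'."""
    return " ".join([name] + list(artists))

def migrate_playlist(sp, yt, spotify_playlist_id, new_title):
    """Copy one Spotify playlist into a brand-new YouTube Music playlist.

    `sp` is a spotipy.Spotify client and `yt` a ytmusicapi.YTMusic client;
    they are passed in so this function stays testable without network access.
    """
    items = sp.playlist_items(spotify_playlist_id)["items"]
    video_ids = []
    for item in items:
        track = item["track"]
        query = track_query(track["name"],
                            [a["name"] for a in track["artists"]])
        hits = yt.search(query, filter="songs")
        if hits:  # take the top search hit; matching is best-effort
            video_ids.append(hits[0]["videoId"])
    return yt.create_playlist(new_title, "Imported from Spotify",
                              video_ids=video_ids)

# Usage (requires real credentials):
#   sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-read-private"))
#   yt = YTMusic("oauth.json")
#   migrate_playlist(sp, yt, "<playlist id>", "Party Mix")
```

Because search matching can pick covers or live versions, it is worth skimming the new playlist afterward and swapping out any bad matches by hand.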

The Scientific Explanations for Why Certain Individuals Have Difficulty in Noisy Surroundings

For some people, noisy surroundings can overwhelm their cognitive processing. A bustling restaurant, a get-together with friends, or simply standing in line to board the next train to work can make it nearly impossible to focus on the conversations happening around them. Now, researchers may have finally unraveled the mystery behind this phenomenon.

As outlined in a new study published in *Brain and Language*, the issue stems from specific alterations in the insulae, two structures folded deep within the lateral sulcus that play a crucial role in higher cognitive functions such as processing emotional and sensory information.

These critical areas are essential to the brain’s functionality, and the researchers found that individuals who experience difficulties hearing speech in a packed environment possess fundamentally different neural wiring. The left insula demonstrates a heightened connection to auditory regions in those individuals.

Moreover, they exhibit signs of this rewiring consistently, even in the absence of active speech amidst noise. According to the researchers, this serves as further proof of how our brains reorganize themselves to fulfill the roles we require. This also prompts fresh inquiries regarding brain functionality, as it was previously assumed that these regions would be less active when the brain was at rest.

However, the fact that these regions stayed consistently engaged even with no information to interpret suggests we must account for distinct baseline changes in connectivity. But the findings did not end there.

The researchers noted that one participant had relatively inadequate hearing for pure tones. Nonetheless, when evaluating their ability to discern speech amidst noise, that participant outperformed all others. This could indicate that individuals with hearing impairments can potentially rewire their brains through practice in identifying sounds.

This discovery is undoubtedly intriguing and suggests a need for more extensive research into hearing loss. Additionally, given the strong link between hearing loss and dementia, the researchers believe these findings might aid in improving our understanding of cognitive decline. This could hold particular significance, as dementia cases have surged in China recently.

Google Tests Generative AI for Improved Weather Summaries in Search

Users can receive immediate, AI-driven weather insights directly within their Google Search results.

(Image credit: Nicholas Sutrich / Android Central)

What you need to know

  • Google seems to be trialing a generative AI weather summary between hourly and 10-day forecasts in search outcomes.
  • The AI-generated summary offers comprehensive weather insights with a dropdown option and links to pertinent articles.
  • This feature is presently undergoing testing and appears to be confined to Southern California, showing a notable distinction from Pixel 9’s AI Weather reports.

Google is expected to introduce a new generative AI summary for users seeking weather updates for a specific area.

At the moment, people searching Google for the weather in a particular location see hourly and 10-day forecasts among the results. Soon, however, the search app will display a new AI-generated summary between those two forecasts, as reported by 9to5Google.

Apple’s Latest AI Model Examines Speech Patterns to Detect Irregularities and Its Importance

### Apple’s Cutting-Edge Strategy in Speech Recognition and Accessibility

In its ongoing work on speech and voice technologies, Apple has recently released a transformative study that takes a person-centered approach to a challenging machine learning task: understanding not just the words spoken but the manner in which they are articulated. This innovation carries substantial implications for accessibility.

#### Voice Quality Dimensions (VQDs)

Within the study, researchers established a framework for speech evaluation based on what they label Voice Quality Dimensions (VQDs). These dimensions encompass measurable characteristics such as intelligibility, harshness, breathiness, and pitch monotony. These are the same features that speech-language therapists consider when evaluating voices impacted by neurological disorders or diseases. Apple is advancing models that can identify these features.

#### Teaching AI to Hear and Understand

Most currently available speech models are predominantly trained on standard, healthy voices, which frequently results in subpar performance with users displaying non-standard speech patterns. This results in a notable accessibility gap. To combat this, Apple’s researchers trained lightweight probes—basic diagnostic models functioning alongside existing speech systems—using a vast public dataset of annotated atypical speech, which includes voices from individuals with ailments like Parkinson’s, ALS, and cerebral palsy.

Rather than concentrating on transcribing spoken language, these models assess how the voice sounds across seven primary dimensions:

1. **Intelligibility**: The clarity of understanding spoken words.
2. **Imprecise consonants**: The precision of consonant pronunciation.
3. **Harsh voice**: A rough or strained quality of the voice.
4. **Naturalness**: The fluidity and typicality of the speech.
5. **Monoloudness**: Consistency in loudness without fluctuation.
6. **Monopitch**: Absence of pitch variation, resulting in a uniform tone.
7. **Breathiness**: A light or whispery quality in the voice.

Essentially, these models have been trained to “listen like a healthcare professional,” prioritizing vocal traits over merely the spoken content.

#### Model Effectiveness and Transparency

Apple used five feature extractors (CLAP, HuBERT, HuBERT ASR, RawNet3, SpICE) and trained lightweight probes on top of each to predict the voice quality dimensions from those features. The probes performed well across most traits, although results varied by specific attribute and task.
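The study describes the probes only as "lightweight." A minimal concrete instance is a linear probe: mean-pool the frozen encoder's features over time, then apply a single linear layer mapping the pooled embedding to the seven dimension scores. The sizes and random weights below are illustrative assumptions, not Apple's actual architecture:

```python
import numpy as np

VQD_NAMES = ["intelligibility", "imprecise consonants", "harsh voice",
             "naturalness", "monoloudness", "monopitch", "breathiness"]

def vqd_probe(embeddings, weights, bias):
    """Linear probe: mean-pool frozen features over time, then one linear
    layer mapping the pooled embedding to seven VQD scores."""
    pooled = embeddings.mean(axis=1)   # (batch, embed_dim)
    return pooled @ weights + bias     # (batch, 7)

rng = np.random.default_rng(0)
embed_dim = 768                        # HuBERT-base frame feature size
features = rng.normal(size=(2, 50, embed_dim))  # stand-in for real features
W = rng.normal(size=(embed_dim, len(VQD_NAMES))) * 0.01
b = np.zeros(len(VQD_NAMES))
scores = vqd_probe(features, W, b)     # one score per dimension per clip
```

Keeping the probe this small is what makes the approach practical: only `W` and `b` are trained on the annotated atypical-speech data, while the large pretrained encoder stays frozen.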

A remarkable feature of this research is the transparency of the model’s outputs, which is rare in AI. Rather than offering a vague “confidence score,” the system specifies particular vocal qualities that influence its classifications. This attribute could significantly improve clinical evaluation and diagnosis.

#### Beyond Accessibility

Remarkably, Apple’s research went beyond the realm of clinical speech evaluation. The team experimented with their models on emotional speech using a dataset named RAVDESS. Although not specifically trained on emotional sounds, the VQD models yielded intuitive predictions. For instance, angry voices displayed reduced monoloudness, calm voices received a lower harshness rating, and sad voices were recognized as more monotone.

This study could pave the way for a more relatable Siri, which could adjust its tone and speech in response to the user’s emotional condition, rather than just their verbal expressions.

The complete study is accessible for further exploration on
