Source: Ars Technica

More than 25% of all new code at Google is now generated by AI, according to the company's CEO.

# AI in Software Development: A New Epoch of Coding Instruments

For as long as humans have built things, we have depended on tools to make the work faster, better, and more precise. From primitive stone implements to today’s advanced machines, each generation of tools has paved the way for the creation of even more sophisticated ones. In software development, this legacy continues with the emergence of artificial intelligence (AI) as a powerful partner for human coders.

AI is now deeply involved in software development, not merely as a visionary notion but as a practical tool that is already transforming how code is written, reviewed, and shipped. As Google CEO Sundar Pichai disclosed during the company’s Q3 2024 earnings call, AI systems currently produce more than a quarter of the new code for Google’s products. This AI-generated code is then reviewed and refined by human engineers, fostering a collaborative environment where machines and humans work together to shape the future of technology.

## The Ascendance of AI-Enhanced Coding

The application of AI in coding extends well beyond Google. According to Stack Overflow’s 2024 Developer Survey, more than 76% of developers are either utilizing or intend to embrace AI tools in their development workflows, with 62% already actively integrating them. Likewise, a 2023 GitHub survey identified that 92% of developers based in the U.S. are employing AI coding instruments both professionally and personally.

AI-assisted coding first captured widespread attention with the launch of GitHub Copilot in 2021. Fueled by OpenAI’s Codex model, Copilot could propose code completions and even create new code following natural language directives. Since that time, AI-driven coding tools have rapidly advanced, with key players such as Google, Meta, Anthropic, and Replit all crafting their own AI-integrated coding helpers.

Indeed, GitHub Copilot has consistently broadened its functionalities. In October 2024, GitHub revealed that developers could now utilize non-OpenAI models, such as Anthropic’s Claude 3.5 and Google’s Gemini 1.5 Pro, to generate code within the platform. This diversification of AI models in coding tools illustrates the increasing demand for more adaptable and potent AI solutions in software development.

## The Advantages and Pitfalls of AI in Coding

AI-assisted coding presents numerous advantages. It can substantially enhance productivity by automating monotonous tasks, proposing code snippets, and even detecting possible bugs before they escalate into major concerns. For companies like Google, this has led to swifter development cycles and more effective utilization of engineering resources.

Nevertheless, the incorporation of AI in coding is not devoid of risks. A 2023 study from Stanford University indicated that developers employing AI coding aids tended to introduce more bugs into their code, despite believing their code to be more secure. This paradox underscores the potential hazards of excessive dependence on AI-generated code without adequate oversight.

Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, has highlighted that while AI can be a valuable resource, it does not replace human expertise. “More code isn’t better code,” she stressed, contending that the quality of the code is what ultimately counts. AI-generated code still necessitates meticulous review and testing by seasoned developers to guarantee it upholds the required standards of reliability and security.

## A Historical Perspective: Reluctance to Transformation

The apprehensions regarding AI in coding are not entirely unprecedented. Throughout the timeline of software development, significant technological transitions have frequently faced skepticism and resistance. For instance, when higher-level programming languages such as Fortran and C were unveiled, some developers feared a loss of mastery over the minutiae of their code. Similarly, the embrace of object-oriented programming in the 1990s ignited discussions about code complexity and performance overhead.

Currently, AI-assisted coding is encountering analogous scrutiny. Some developers express concern that excessive reliance on AI tools could lead to a deterioration of coding skills or result in software that is more challenging to debug and maintain. Former Microsoft VP Steven Sinofsky, however, asserts that these apprehensions represent a recurring theme. “If you think functional AI assisting in coding will make humans less intelligent or isn’t authentic programming, just remember that’s been the argument against every generation of programming tools since Fortran,” he remarked.

Indeed, even seemingly trivial advancements like syntax highlighting in text editors were once contentious. Today, features such as syntax coloring are deemed essential for enhancing code clarity and minimizing errors. As AI tools become more embedded in the development process, it’s probable that many of the current concerns will dissipate, mirroring the debates of yesteryears.

## Tools Creating Tools: The Horizon of AI in Development

At its essence, the application of AI in software development is an extension of a long-established trend: employing tools to fabricate more sophisticated tools. Just as early engineers harnessed computers to design the next generation of microchips, contemporary developers are utilizing AI to compose the code that fuels the software.

# Apple’s Latest M4 MacBook Pro Range: Enhanced Memory, Superior Display Support, and Upgraded Performance

Apple has updated its MacBook Pro range again, this time unveiling the M4 chip. Following the recent refresh of the iMac and Mac mini, the new M4 MacBook Pros offer notable enhancements in memory, display capabilities, and computational power, making them an attractive choice for casual users as well as professionals.

## Key Features of the M4 MacBook Pro Range

– **Base Model Priced at $1,599**: The 14-inch MacBook Pro equipped with the standard M4 chip starts at $1,599, featuring 16GB of RAM and 512GB of storage.
– **Enhanced Memory**: The base configuration now includes 16GB of RAM, addressing a major drawback of earlier models that began with only 8GB.
– **Superior Display Support**: The M4 MacBook Pro can now manage two external displays along with the built-in screen, a marked enhancement compared to the M3 version.
– **Additional Thunderbolt Ports**: The base M4 model offers three Thunderbolt 4 ports, one more than the previous model.

## An In-Depth Examination of the M4 MacBook Pro

### Consistent Design

Apple has closely retained the design of the prior M3 MacBook Pro models, even keeping the default desktop wallpaper the same. While this design consistency may not be revolutionary, it ensures familiarity for users who appreciate the sleek and professional look of the MacBook Pro.

### Performance Enhancements with the M4 Chip

The M4 chip delivers a significant performance increase compared to the M3, especially in the base model. The standard M4 setup includes 10 CPU cores (four performance cores and six efficiency cores), allowing for improved multitasking and overall system performance. Furthermore, the M4 chip supports the connection of three displays at once—two external and the built-in screen—making it a more adaptable choice for users who require multiple monitors for their tasks.

### Memory and Storage Options

The base variant now begins with 16GB of RAM, marking a substantial upgrade from the 8GB provided in previous iterations. This adjustment makes the $1,599 MacBook Pro a more appealing option for users who require greater memory for tasks such as video editing, graphic design, or running several applications simultaneously. The storage capacity remains at 512GB for the base version, with the option to enhance.

### Thunderbolt Ports and Display Capabilities

A key feature of the new M4 MacBook Pro is the addition of an extra Thunderbolt port. The base version now includes three Thunderbolt 4 ports, compared to the two available in the M3 model. This additional port provides increased versatility for connecting external devices like monitors, hard drives, and other peripherals.

Regarding display capabilities, the M4 MacBook Pro can manage two external displays along with the built-in screen, resulting in a total of three displays. This is a substantial upgrade from the M3 model, which was limited to driving two external displays if the built-in display was deactivated.

## M4 Pro and M4 Max: Designed for Power Users

For those seeking even higher performance, Apple presents the M4 Pro and M4 Max models of the MacBook Pro. These versions come with extra CPU and GPU cores, enhanced RAM, and superior display support.

### M4 Pro: Expanded Cores and RAM

The M4 Pro variant of the 14-inch MacBook Pro starts at $1,999 and boasts up to 14 CPU cores (10 performance and 4 efficiency) and 20 GPU cores. It also includes 24GB of RAM by default, an increase of 6GB over the M3 Pro. For users needing additional memory, the M4 Pro can be configured with up to 64GB of RAM. Moreover, the Thunderbolt ports on the M4 Pro support the latest Thunderbolt 5 standard, offering up to 120Gbps of bandwidth, which is a significant upgrade from the 40Gbps provided by Thunderbolt 4.

### M4 Max: The Pinnacle of Performance

Topping the range is the M4 Max, available in both 14-inch and 16-inch models. The M4 Max features up to 16 CPU cores (12 performance and 4 efficiency) and an impressive 40-core GPU, making it particularly well-suited for users who demand extreme performance for applications such as 3D rendering, video editing, or machine learning.

The M4 Max starts with 36GB of RAM and can be upgraded to 48GB, 64GB, or even 128GB, although the highest configuration incurs an additional cost of $1,200. The M4 Max also allows for connection to up to four external displays: three 6K displays at 60Hz via Thunderbolt and one 4K display at up to 144Hz over HDMI.

### Hyundai Ioniq 9: The Dawn of Roomy EV Innovations

In December, Hyundai is poised to officially debut its newest electric vehicle, the **Ioniq 9**, a three-row SUV that aims to redefine electric vehicle (EV) design and capabilities. Constructed on Hyundai Motor Group’s esteemed **E-GMP platform**, which also supports the Ioniq 5 and Ioniq 6, the Ioniq 9 is set to deliver a roomy, family-oriented EV experience. Prior to the grand unveiling, Hyundai has provided teasers of some design features, offering a sneak peek into what this eagerly awaited model has in store.

#### A Glimpse of What’s Ahead

Hyundai has released preliminary sketches and pictures of the Ioniq 9, highlighting its unique design characteristics. Like its smaller counterparts, the Ioniq 5 and Ioniq 6, the Ioniq 9 will boast **”parametric pixels”** in its headlamps. This design trait, which imparts a pixelated, 8-bit look to the vehicle, has become a hallmark of Hyundai’s Ioniq series. The teaser visuals showcase a stylish, contemporary front end, with the parametric pixel headlamps adding a forward-thinking element.

![Hyundai Ioniq 9 Headlights](https://cdn.arstechnica.net/wp-content/uploads/2024/10/63893-HyundaiMotorPresentsFirstLookatIONIQ9EmbarkingonaNewEraofSpaciousEVDesign-scaled.jpg)
*Credit: Hyundai*

#### Constructed on the E-GMP Framework

The Ioniq 9 will be developed on Hyundai’s **E-GMP (Electric Global Modular Platform)**, which has already demonstrated its prowess in the Ioniq 5 and Ioniq 6. This platform is recognized for its versatility, efficiency, and capability to accommodate both rear-wheel and all-wheel drive setups. It also facilitates rapid charging and extensive range, making it a solid base for Hyundai’s expanding EV collection.

The E-GMP platform has previously been utilized to create another large electric SUV, the **Kia EV9**, which has been available for some time. The Kia EV9, which shares numerous components with the Ioniq 9, has achieved commercial success, even with its higher price. This indicates robust demand for sizable electric SUVs, particularly in regions like the United States, where larger vehicles are preferred.

#### Design Philosophy: “Aerosthetic”

Hyundai has unveiled a fresh design ethos with the Ioniq 9, termed **”Aerosthetic.”** This design strategy seeks to merge **aerodynamic efficiency** with **visual allure,** culminating in a vehicle that stands out aesthetically while being highly effective. The Ioniq 9’s silhouette exhibits a sweeping roofline, which presents a more dynamic profile compared to the boxier designs of some prior models from Hyundai.

![Hyundai Ioniq 9 Side View](https://cdn.arstechnica.net/wp-content/uploads/2024/10/63894-HyundaiMotorPresentsFirstLookatIONIQ9EmbarkingonaNewEraofSpaciousEVDesign-1440×2085.jpg)
*Credit: Hyundai*

The design team at Hyundai has also drawn from **traditional Korean garments**, particularly the **hanbok,** influencing the character lines along the sides of the vehicle. These lines lend the Ioniq 9 a distinct, flowing aesthetic, distinguishing it from other SUVs on the market.

#### Wheels and Aerodynamic Features

The teaser visuals also showcase the Ioniq 9’s **multispoke alloy wheels**, which incorporate aerodynamic features aimed at minimizing drag. This emphasis on aerodynamics aligns with Hyundai’s dedication to enhancing the efficiency of its electric vehicles, ensuring that the Ioniq 9 not only possesses visual appeal but also excels in range and power consumption.

# Apple Raises Base RAM in MacBook Air Models to 16GB

In a significant development for its MacBook Air series, Apple has elevated the base RAM in all its entry-level models from 8GB to 16GB, without any increase in price. This adjustment affects both M2 and M3 MacBook Air models and represents the first upgrade to base RAM in any of Apple’s Macs since 2012. The M2 MacBook Air continues to start at $999, while the 13-inch and 15-inch M3 variants are set at $1,099 and $1,299, respectively.

This enhancement falls within Apple’s larger strategy to safeguard the longevity of its devices, particularly as the company incorporates more AI-enabled features into its ecosystem. Although there are no current Apple Intelligence AI functionalities that specifically necessitate 16GB of RAM, the upgrade appears to be a proactive step to meet future demands, specifically in fields such as large language models and image-generation algorithms, which require extensive resources.

## A Long-Expected Improvement

Previously, users could opt for a RAM upgrade to 16GB for an additional $200. Now, the 16GB option is standard, with the maximum RAM allowance for these models remaining at 24GB. The cost to upgrade from 16GB to 24GB remains at $200, making the overall pricing structure more user-friendly for those needing extra memory for demanding tasks.

This modification addresses a frequent criticism of Apple’s laptops—insufficient base RAM. While 8GB of RAM suffices for basic functions, users involved in more demanding activities like video editing, software engineering, or multitasking across several applications will find the expanded memory advantageous. The additional RAM will also help future-proof the devices, ensuring they stay effective as software and user requirements evolve.

## Significance of the RAM Increase

The transition to 16GB of RAM is especially significant as Apple advances its integration of AI-driven capabilities into its products. Although current AI features, including the Apple Intelligence suite, do not require more than 8GB of RAM, upcoming advancements may push the existing hardware to its limits. For instance, early beta versions of Xcode’s Predictive Code Completion feature required 16GB of RAM to operate effectively, although the final version was tuned to run with just 8GB.

This phenomenon isn’t exclusive to Apple. Microsoft has also established a baseline of 16GB RAM for its Copilot+ PCs, aimed at leveraging more AI capabilities through on-device processing rather than cloud computing. These PCs also demand a minimum of 256GB of storage and a neural processing unit (NPU) that adheres to particular performance criteria. With the 16GB RAM enhancement and Apple’s 16-core Neural Engine, the M2, M3, and forthcoming M4 Macs are well-equipped to fulfill similar specifications.

## Readying for the AI Future

Apple’s choice to augment the base RAM in its MacBook Air lineup likely mirrors the rising demands of AI and machine learning applications. As more AI features are woven into macOS and other Apple platforms, adequate RAM will be vital for seamless performance. On-device AI processing, which diminishes the reliance on cloud computations, necessitates increased memory to manage tasks like real-time image recognition, natural language processing, and predictive algorithms.

This initiative also aligns with Apple’s overarching strategy of embedding AI into its hardware and software ecosystem. The company’s Neural Engine, integrated into its M-series chips, already handles a broad spectrum of AI tasks. By raising the base RAM, Apple positions its devices to fully leverage these capabilities, both now and in the future.

## In Conclusion

Apple’s initiative to boost the base RAM in its MacBook Air models from 8GB to 16GB is a positive shift for users requiring more memory for intensive tasks. The upgrade enhances the versatility and future resilience of the laptops, particularly as Apple continues to integrate additional AI features into its offerings. With no price increase for the base models, this move renders the MacBook Air lineup even more enticing to a diverse array of users, from casual consumers to professionals.

As AI becomes increasingly pivotal in computing, having adequate RAM will be crucial for operating resource-heavy applications and ensuring fluid performance. Apple’s decision to raise the base RAM in its MacBook Air models signals the company’s preparation for the future of AI, enhancing the capabilities and versatility of its devices for years ahead.

# United Launch Alliance’s Atlas V and Vulcan Rockets: Payload Fairing Debris and Future Considerations

In September 2023, during a classified operation for the US Space Force and National Reconnaissance Office (NRO), United Launch Alliance (ULA) encountered an unanticipated challenge with its Atlas V rocket. As the rocket ascended into orbit, inadvertently aired footage revealed debris detaching from the rocket’s payload fairing—a protective covering meant to safeguard delicate spacecraft against the extreme conditions of launch. This occurrence has since prompted worries about the reliability of the payload fairing and its possible implications for upcoming missions.

## The Incident: Atlas V Payload Fairing Debris

The Atlas V rocket was tasked with deploying three classified reconnaissance satellites as part of a collaborative Space Force-NRO initiative called **Silent Barker**. These satellites, stationed in geosynchronous orbit over 22,000 miles from the Earth, act as guardians, keeping a watchful eye on potential threats to critical military and intelligence satellites. The Silent Barker mission is vital for preserving the United States’ space situational awareness, particularly given the increasing space endeavors by rivals like China and Russia.

During the launch, the payload fairing—consisting of two clamshell-like sections—was released as the rocket climbed. However, the live broadcast unveiled fragments of material, likely insulation from the interior of the fairing, detaching and dropping away. This debris sparked concerns regarding the possible hazards to the sensitive spacecraft contained within the fairing.

### The Importance of the Fairing

The payload fairing is essential for shielding spacecraft throughout the initial phases of launch. It protects the payload from aerodynamic pressures, extreme temperatures, and acoustic vibrations while the rocket ascends through the atmosphere. After reaching the vacuum of space, the fairing is no longer necessary and is discarded to decrease weight.

Any remnants from the fairing could present a risk to the spacecraft, possibly damaging delicate components or jeopardizing the mission. Nevertheless, neither the Space Force nor the NRO has acknowledged any harm to the Silent Barker satellites, and ULA has declared the mission a success.

## Ongoing Investigation and Preventive Measures

More than a year post-incident, ULA is still examining the cause of the debris. A spokesperson from ULA affirmed that the company is collaborating closely with its clients and suppliers to tackle the concern. “We have instituted some corrective measures and additional inspections of the hardware,” the spokesperson communicated, suggesting that ULA is actively working to avert similar problems in future launches.

### Previous Occurrences and Wider Issues

One source indicated that ULA has recorded fairing debris on at least one other classified Atlas V mission. Despite this, ULA and the Space Force have categorized all Atlas V launches, including the Silent Barker mission, as successful. However, the repetitive nature of this problem has led ULA to scrutinize its fairing design and production methods more thoroughly.

## The Vulcan Rocket: A New Chapter for ULA

As ULA gears up to transition from the Atlas V to its next-generation **Vulcan rocket**, the company is eager to ensure that the new vehicle avoids similar complications. The Vulcan rocket has already finished two successful test flights, and ULA has reported no concerns with payload fairing debris on either occasion.

Both the Atlas V and Vulcan rockets utilize composite payload fairings created by **Beyond Gravity**, a branch of the Swiss corporation RUAG. Beyond Gravity also produces fairings for Europe’s Ariane 6 and Vega rockets. While there are similarities in the manufacturing of the Vulcan and Atlas V fairings—such as the application of carbon-fiber layup techniques and low-pressure oven curing—there are also notable differences in their designs.

### Vulcan’s Booster Challenge

While the Vulcan rocket shows promise, it has encountered its own set of challenges. During the second Vulcan test flight in October 2024, the nozzle of one of the rocket’s strap-on solid-fuel boosters detached less than 40 seconds post-launch. Regardless, the rocket’s main engines compensated for the uneven thrust, enabling the mission to continue successfully.

This occurrence, alongside the ongoing investigation into the payload fairing debris, has prompted ULA and the Space Force to meticulously assess the Vulcan rocket’s preparedness for operational deployments. Despite the booster issue, Space Force representatives are hopeful that the Vulcan rocket will gain certification for national security launches by the year’s end.

## Looking Ahead: Securing the Future of Space Security

As ULA readies for the first operational flight of the Vulcan rocket, the company is focused on ensuring that all technical challenges are addressed. Col. James Horne, who oversees launch operations for Space Systems Command, remarked that the Space Force could certify the Vulcan rocket for national security missions “with open work.”

# Blue Origin’s New Glenn Rocket Progresses Towards Launch with Important Achievements

Blue Origin, the aerospace company established by Jeff Bezos, has made a crucial move towards the anticipated launch of its substantial *New Glenn* rocket. On Tuesday evening, the firm transported the first stage of the rocket to its launch location at Cape Canaveral, Florida. This event signifies a key advancement in the creation of the heavy-lift vehicle, which aims to compete with SpaceX’s *Falcon Heavy* and *Starship* rockets in the commercial aerospace arena.

## The Route to the Launch Location

Even though Blue Origin’s rocket manufacturing facility is situated only a few miles from Launch Complex 36 at Cape Canaveral Space Force Station, the massive size of the *New Glenn* rocket and its transporter necessitated a more roundabout journey. According to Blue Origin’s CEO, Dave Limp, the trip to the launch pad covered 23 miles.

The company has affectionately referred to its transporter as “GERT,” an abbreviation for “Giant Enormous Rocket Truck.” This specially designed vehicle is an engineering marvel in its own right, made up of two trailers joined by cradles and a strongback assembly. The transporter features 22 axles and 176 tires, and it is towed by an Oshkosh M1070, a repurposed U.S. Army tank transporter with a power output of 505 horsepower and 1,825 pound-feet of torque.

Measuring 310 feet (95 meters) in length and 23 feet (7 meters) in diameter, the *New Glenn* booster is too large to pass under standard bridges, requiring a meticulously planned route to reach the launch site.

## A Significant Achievement for Blue Origin

The transfer of the rocket to the launch location is a clear sign that the first stage of *New Glenn* is approaching its inaugural launch. Once it takes to the skies, *New Glenn* will mark the third commercial heavy-lift rocket available in the U.S. market, joining SpaceX’s *Falcon Heavy* and *Starship*. This progression highlights the increasing influence of commercial entities within the U.S. space sector, which has historically been dominated by federal agencies like NASA.

One notable feature of *New Glenn* is its fully reusable first stage, engineered to land on a droneship post-launch. This reusability aspect is essential for minimizing space mission costs and enhancing access to space.

However, prior to its launch, the rocket must complete two essential evaluations. The first is a “wet dress rehearsal,” where the rocket will be completely fueled, and its ground systems will be assessed. Subsequently, a hot-fire test will occur, igniting the seven BE-4 engines on the first stage for several seconds to confirm their proper function.

## The Path Forward: Final Adjustments

Just over a month ago, Blue Origin successfully concluded a 15-second hot-fire test of the *New Glenn* second stage. Powered by two BE-3U engines that utilize liquid oxygen and hydrogen to generate 173,000 pounds of thrust, this successful test was a vital advancement. Still, the most demanding phase lies ahead.

The forthcoming assessments will represent the first instance where the flight configurations of both the first and second stages are integrated and linked to the ground systems at Cape Canaveral. The complexity and size of the rocket suggest potential hurdles during this integration phase. As implied by Blue Origin’s transporter, GERT, these are intricate machines, and any slight complication could postpone the launch.

## Will *New Glenn* Launch in 2024?

Jeff Bezos is advocating for Blue Origin to initiate the launch of *New Glenn* by the close of 2024, but the timing is rapidly becoming constrained. The company originally aimed to launch a small payload heading to Mars for NASA, dubbed ESCAPADE, in October, but that launch has already faced delays.

To gauge the timeline, we can reference SpaceX’s *Falcon Heavy* case. SpaceX transported the *Falcon Heavy* to the launch pad for the first time on December 28, 2017. The rocket underwent a hot-fire test on January 24, 2018, and accomplished its launch on February 6, 2018. This entire timeline spanned 40 days.

While *New Glenn* is a distinct vehicle, and Blue Origin lacks the operational experience of SpaceX, this timeline indicates that a launch in early to mid-December might be possible. Nevertheless, any unexpected complications during the concluding tests could delay the launch until early 2025.
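As a quick check on the 40-day figure cited above, the milestone dates can be run through a short date calculation (a minimal Python sketch; the dates are those quoted in the article):

```python
from datetime import date

# Milestones from SpaceX's Falcon Heavy campaign, as cited above
rollout = date(2017, 12, 28)   # first transport to the launch pad
hot_fire = date(2018, 1, 24)   # static hot-fire test
launch = date(2018, 2, 6)      # maiden launch

print((launch - rollout).days)   # 40 days from rollout to launch
print((launch - hot_fire).days)  # 13 days from hot fire to launch
```

Adding a comparable 40-day window to New Glenn's late-October rollout is what puts a hypothetical launch in early to mid-December.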

## The Future of Commercial Space Exploration

The launch of *New Glenn* will represent a significant milestone not only for Blue Origin but also for the broader commercial space sector. With *New Glenn* entering the U.S. heavy-lift rocket market, competition is set to escalate, potentially leading to lowered costs.

# Google’s Cloud Sector: Expansion Amid Intense Rivalry

Alphabet, Google’s parent company, has recently announced a notable 34% increase in profits for the third quarter of 2024, fueled by significant growth in its cloud business. With rising demand for computing and data services, especially those essential for training and operating generative artificial intelligence (AI) models, Google Cloud has established itself as an important component of Alphabet’s offerings. Nevertheless, Google Cloud still trails Microsoft Azure and Amazon Web Services (AWS) by a wide margin, holding third place in the cloud computing landscape.

## Impressive Financial Results for Q3 2024

Alphabet’s earnings for Q3 2024 surpassed predictions, with net income soaring to $26.3 billion, up from $19.7 billion in the same quarter the previous year. Revenue grew by 15%, climbing to $88.3 billion and exceeding analysts’ forecasts of $86.3 billion. Google Cloud stood out in Alphabet’s lineup, with a 35% revenue boost, reaching $11.4 billion. Notably, the operating profit of Google Cloud jumped roughly sevenfold, from $266 million in Q3 2023 to $1.9 billion in Q3 2024.
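The reported growth figures hang together arithmetically; here is a quick sanity check using the dollar amounts quoted above (values in billions):

```python
# Figures from Alphabet's earnings report, as quoted above ($ billions)
net_income_now, net_income_prior = 26.3, 19.7
cloud_op_profit_now, cloud_op_profit_prior = 1.9, 0.266

# Year-over-year net income growth
profit_growth = (net_income_now - net_income_prior) / net_income_prior
print(f"Net income growth: {profit_growth:.0%}")  # ~34%

# Google Cloud operating profit multiple
cloud_multiple = cloud_op_profit_now / cloud_op_profit_prior
print(f"Cloud operating profit multiple: {cloud_multiple:.1f}x")  # ~7.1x, i.e. roughly sevenfold
```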

Sundar Pichai, the CEO of Alphabet, credited the firm’s achievements to its sustained commitment to AI. “Our long-term focus and investment in AI

### The Quantum Computing Excitement: Why the Recent “Advancement” in Cryptography Isn’t as Impressive as It Appears

Quantum computing is frequently acclaimed as the next technological frontier, boasting the ability to transform various sectors ranging from pharmaceuticals to AI. Nonetheless, one of the most prominent discussions surrounding quantum computing is its potential to compromise contemporary cryptographic systems that currently safeguard everything from online banking to military communications. Although quantum computers are still in their early stages, the mere prospect of their future capabilities has ignited a surge of sensational headlines and overstated claims about the impending collapse of cryptography.

Recently, yet another instance of this hype cycle has emerged, with reports of Chinese researchers announcing a “breakthrough” in quantum computing that could endanger military-grade encryption. However, as is often the case with such assertions, the reality is considerably less dramatic than the headlines suggest.

#### The Most Recent “Advancement” in Perspective

Three weeks ago, the *South China Morning Post* conveyed that scientists from Shanghai University had achieved a noteworthy quantum computing advancement. Per the article, the researchers utilized a quantum computer to challenge encryption algorithms based on substitution-permutation networks (SPNs), a framework leveraged in numerous contemporary cryptographic systems. The researchers asserted that this was the first instance of a quantum computer posing a “real and substantial threat” to such encryption techniques.

Nevertheless, the article did not reference the original research paper, and follow-up analyses indicated that the claims were not as revolutionary as initially portrayed. The research paper, published in the *Chinese Journal of Computers*, did not aim to break commonly used encryption algorithms like RSA or AES but rather examined academic block ciphers such as PRESENT, GIFT-64, and RECTANGLE. These lightweight encryption techniques are devised for constrained settings like embedded systems and are not extensively used in critical applications.

#### What the Research Actually Shows

The researchers employed a D-Wave quantum annealer—a kind of quantum computer tailored to specific optimization problems—to identify integral distinguishers in these SPN-based algorithms. Integral distinguishers are mathematical tools used in cryptanalysis to detect non-random behavior in a cipher, a first step toward mounting an attack. However, discovering these distinguishers is nothing new; conventional computing methods have been successful at this task for years.

Fundamentally, the researchers showed that quantum annealing could match the effectiveness of traditional mathematical techniques in identifying these distinguishers. While this result is intriguing, it does not signify a breakthrough in breaking widely used encryption methods. As David Jao, an expert in post-quantum cryptography (PQC) at the University of Waterloo, put it: “It’s akin to developing a new technique for lock-picking. The outcome remains unchanged, but the technique is novel.”

#### The Function of Quantum Annealing

Quantum annealing, the approach used in this study, is a specialized variety of quantum computing that excels at optimization problems. It is not the same as the general-purpose, gate-based quantum computing that might eventually break encryption algorithms like RSA. D-Wave, the firm behind the quantum annealer used in this research, has been manufacturing commercial quantum annealers since 2011; however, these systems are restricted in scope and cannot run general quantum algorithms.

The D-Wave Advantage system engaged in the research boasts 5,000 qubits, yet these qubits do not directly equate to those in general-purpose quantum computers. Additionally, the optimization challenges tackled by quantum annealers often need partitioning into smaller sub-challenges, constraining their applicability for extensive cryptographic assaults.
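What a quantum annealer actually does is minimize a quadratic function of binary variables—a QUBO (quadratic unconstrained binary optimization) problem. The Python sketch below brute-forces a tiny two-variable instance to show the shape of the problem; an annealer's appeal is in exploring far larger landscapes than exhaustive search can, not in doing anything exhaustive search cannot do in principle.

```python
from itertools import product

# A QUBO assigns an "energy" to every 0/1 assignment of n variables via a
# quadratic form; an annealer physically relaxes toward low energy. Here we
# simply enumerate all assignments of a toy instance.
def solve_qubo(Q: dict, n: int):
    """Q maps (i, j) index pairs to coefficients; returns (assignment, energy)."""
    best_x, best_e = None, float("inf")
    for x in product((0, 1), repeat=n):
        e = sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: rewards setting x0 or x1 individually, penalizes setting both.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
print(solve_qubo(Q, 2))  # → ((0, 1), -1.0)
```

Cryptanalytic tasks like distinguisher search must first be encoded into this form, and large instances typically have to be split into sub-problems—one reason annealers are a poor fit for full-scale attacks on deployed ciphers.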

#### The Real Quantum Threat to Cryptography

Although this latest study does not present an immediate danger to established encryption algorithms, the overarching concern regarding quantum computing’s effect on cryptography remains valid. Once they become fully developed, quantum computers could indeed compromise many of the cryptographic frameworks currently in use. The most susceptible are asymmetric encryption algorithms such as RSA and ECC (Elliptic Curve Cryptography), which depend on the difficulty of factoring large numbers or resolving discrete logarithm problems—tasks that quantum computers could theoretically execute significantly faster than their classical counterparts.
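To see why factoring is the crux, consider a textbook-sized RSA example in Python (real keys use primes hundreds of digits long; the numbers here are deliberately tiny). An attacker who can factor the public modulus recovers the private key immediately—and factoring at scale is exactly what Shor's algorithm, run on a large fault-tolerant quantum computer, would make feasible.

```python
# Toy RSA: the public modulus n hides two primes, and recovering them
# (trivial here, infeasible at real key sizes) yields the private key.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse)

msg = 42
cipher = pow(msg, e, n)        # encrypt
assert pow(cipher, d, n) == msg  # decrypt round-trips

# An attacker who factors n recomputes d the same way the key owner did.
recovered_p = next(f for f in range(2, n) if n % f == 0)
recovered_q = n // recovered_p
d_attacker = pow(e, -1, (recovered_p - 1) * (recovered_q - 1))
assert pow(cipher, d_attacker, n) == msg
```

Nothing in this sketch is quantum; the point is that RSA's entire security margin is the classical hardness of the factoring step, which is the step a mature quantum computer would erase.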

Conversely, symmetric encryption algorithms like AES (Advanced Encryption Standard) are generally deemed secure against quantum threats, as long as key sizes are adequate. For instance, AES-256 is broadly regarded as being resistant to quantum computing assaults.
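The standard reasoning behind that claim is Grover's algorithm, which offers at most a quadratic speedup on brute-force key search—halving a symmetric cipher's effective key length rather than demolishing it. A quick back-of-the-envelope in Python:

```python
# Grover's algorithm searches an n-bit keyspace in roughly 2**(n/2) steps,
# so a symmetric cipher retains about half its key length of security
# against a quantum brute-force attack.
def effective_quantum_bits(key_bits: int) -> int:
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: ~{effective_quantum_bits(bits)} bits of quantum security")
```

AES-128's reduced ~64-bit margin is why guidance for long-lived secrets tends to favor AES-256, which retains a comfortable ~128 bits even against a quantum adversary.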

#### The Significance of Post-Quantum Cryptography

The domain of post-quantum cryptography (PQC) concentrates on crafting new cryptographic algorithms capable of enduring assaults from quantum computers. The U.S. National Institute of Standards and Technology (NIST) has spearheaded an initiative to standardize PQC algorithms, and several candidates are presently under review for widespread implementation.

While quantum computers capable of breaking RSA are likely still decades away, it is vital for industries and governments to begin the transition to quantum-resistant algorithms now. The process of replacing existing cryptographic systems is intricate and time-consuming, so early action is essential to ensure a smooth transition.

#### Caution Against the Hype

The recent media portrayal of this research is a reminder of how easily incremental quantum computing results can be inflated into claims of broken encryption. Until large, fault-tolerant quantum computers actually exist, such headlines deserve scrutiny rather than alarm.


# Ars Technica Redesign 9.0.2: Enhanced Text Controls and Personalization Features

In a time when user experience is crucial, Ars Technica has reaffirmed its dedication to providing an exceptional reading environment with the launch of **Ars Technica Redesign 9.0.2**. This update, guided by user feedback, aims to give readers greater control over their content consumption, particularly in terms of text customization. Whether you’re a casual visitor or a devoted member, this update promises to refine your browsing experience with enhanced flexibility and personalization.

## Highlighted Features of the Redesign

### 1. **Font Size Selector**
One of the most sought-after features is the capacity to modify the font size. With the revamped design, Ars Technica presents a **font size selector** that enables users to select from **Small, Standard, and Large** font sizes. Whether you lean towards a more compact reading layout or prefer larger text for enhanced legibility, the decision is now at your fingertips.

– **Small**: For users who enjoy a more condensed view, maximizing screen space for content.
– **Standard**: The default size, fine-tuned for a harmonious reading experience.
– **Large**: Perfect for readers seeking a more spacious and comfortable text size without straining their vision.

### 2. **Hyperlink Color Personalization**
Ars Technica has always had an eye for detail, and now, readers can modify the hue of hyperlinks in articles. The latest update lets you revert to the classic **orange hyperlinks** or maintain the default color scheme. This minor yet significant modification allows users to tailor their visual experience according to their preferences.

### 3. **Expanded Mode for Subscribers**
Supporters of Ars Technica who choose the **$25/year subscription** can access an exclusive feature: **Wide Mode**. This capability allows for an expanded column layout, increasing the information density on the page. This is especially beneficial for readers who appreciate fewer line breaks and more visible content simultaneously.

– **Standard Width**: The usual width, appropriate for the majority of users.
– **Wide Width**: An exclusive feature for subscribers that broadens the text column, providing a more immersive reading experience.

### 4. **Compact Headlines and Enhanced Breakpoints**
Beyond text customization, the redesign also offers **more compact headlines** and **enhanced breakpoints** for responsive design. This ensures that the layout adjusts more fluidly across various devices, guaranteeing that the content is always presented in the most effective manner, whether you’re on a desktop, tablet, or smartphone.

### 5. **Story Intro Image Enlargement**
Another notable feature is the ability to **enlarge story intro images**. If you enjoy visuals alongside written content, this function will enable you to zoom in on images for a more detailed view, enriching the storytelling experience.

### 6. **Collapsible Text Settings Panel**
After establishing your preferred text settings, you can now **collapse the text settings panel** into the page navigation area. This keeps the customization options constantly accessible without being obtrusive, letting you concentrate on the content without distractions.

### 7. **Per-Device Text Settings**
Text settings are saved in your browser rather than your account, so you will need to set your preferences once on each device you use. After that, your chosen configuration persists on that device, whether you’re reading on your desktop at work or your tablet at home.

## What’s Ahead?

Ars Technica is just getting started. The development team is currently working on the next set of enhancements, which will further optimize the user experience. Here are some exciting features to anticipate:

### 1. **True Light Mode**
For those who favor a lighter interface, Ars Technica will soon roll out a **true light mode**. This mode will eliminate dark background elements, providing a more inviting, brighter reading experience.

### 2. **Front-Page Notifications Enhancements**
Improvements to front-page notifications are also underway. Soon, you’ll be able to view more elaborate activity in threads where you’ve participated, making it easier to remain engaged with ongoing conversations.

### 3. **Revamped Comments and Voting Mechanism**
A **revamp of the front-page comments and voting mechanism** is also on the way. This update will introduce more nuanced features, enabling users to interact with content and discussions in a more profound manner.

## The Importance of These Changes

Ars Technica has consistently taken pride in serving an astute audience—readers who appreciate **information density**, **readability**, and **user experience**. The 9.0.2 redesign exemplifies this dedication, offering features that not only enhance the site’s visual appeal but also elevate its overall functionality.

For subscribers, the **ad-free experience** combined with the **Wide Mode** option provides a clutter-free reading environment that improves engagement and satisfaction.


# Microsoft and GitHub’s Multi-Model AI Strategy: Implications for the Future of AI Tools

In a noteworthy transformation within the AI realm, GitHub CEO Thomas Dohmke recently revealed that GitHub Copilot, the widely utilized AI-driven coding helper, will shift from solely relying on OpenAI’s GPT models to a multi-model framework. This transition is set to take place over the next few weeks, heralding a new phase in the development of AI-fueled coding instruments. This decision has ignited discussions regarding whether Microsoft, GitHub’s parent organization, will implement a similar tactic for its diverse AI offerings, including Microsoft Copilot.

## GitHub Copilot’s Shift to Multi-Model

Since its introduction in 2021, GitHub Copilot has transformed how developers work, and it will now give users the option to switch between various AI models. Initially, this will include Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, in addition to OpenAI’s GPT models. This multi-model approach will let developers tailor the AI’s capabilities to their individual requirements, whether they are working in different programming languages or addressing a variety of tasks.

Dohmke stated that adopting a multi-model strategy is crucial because “there is no single model that excels in every situation.” Different models perform better on varying tasks, and developers will now have the opportunity to select the most appropriate model for their specific application. For instance, some models may be more capable of advanced reasoning tasks, while others could be superior in certain programming dialects.

### Key Features of the Multi-Model Strategy

– **Model Versatility**: Developers can transition between models, even during a conversation, facilitating a more tailored experience. Organizations will also have oversight on which models are accessible to their teams.
– **Enhanced Model Support**: GitHub is set to introduce additional OpenAI models shortly, such as o1-preview and o1-mini, which are engineered to handle more intricate reasoning tasks than the commonly used GPT-4.
– **Wider Integration**: The multi-model capability will first be accessible in Copilot Chat’s web and VS Code environments, but GitHub intends to broaden this feature to other aspects, including Copilot Workspace, multi-file editing, code review, security autofix, and the command-line interface (CLI).
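None of GitHub's internals are public, but the features above imply a routing layer of roughly this shape. In the illustrative Python sketch below, the model identifiers are real product names from the announcement, while the task labels, policy table, and `pick_model` function are hypothetical:

```python
# Hypothetical model-routing sketch (not GitHub's actual implementation):
# each request names a task, a policy maps tasks to a preferred model, and
# an organization-level allowlist controls which models are available.
DEFAULT_MODEL = "gpt-4o"
ROUTING_POLICY = {
    "complex-reasoning": "o1-preview",
    "code-completion": "claude-3.5-sonnet",
    "long-context": "gemini-1.5-pro",
}

def pick_model(task: str, org_allowlist: set) -> str:
    candidate = ROUTING_POLICY.get(task, DEFAULT_MODEL)
    # fall back to the default when the org has disabled the preferred model
    return candidate if candidate in org_allowlist else DEFAULT_MODEL

allow = {"gpt-4o", "gemini-1.5-pro"}
print(pick_model("long-context", allow))     # → gemini-1.5-pro
print(pick_model("code-completion", allow))  # Claude disabled → gpt-4o
```

The same pattern extends naturally to per-conversation switching: the chosen model becomes a property of the session rather than of the product.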

These developments are reflective of a larger trend within the AI sector, where both developers and organizations increasingly pursue more specialized and adaptable AI solutions.

## GitHub Spark: Simplified App Development with Natural Language

Alongside the updates to Copilot, GitHub also introduced **GitHub Spark**, a natural language initiative aimed at making app development more user-friendly. Spark enables individuals without coding skills to construct simple applications through natural language queries, while seasoned developers can modify the underlying code as required. This conversational method of app development allows users to easily iterate and contrast various iterations of their applications.

Similar to Copilot, GitHub Spark will leverage multiple AI models from OpenAI, Google, and Anthropic, further emphasizing GitHub’s commitment to a multi-model future. While GitHub Spark remains in its early preview stage, those interested can join a waitlist for access.

## Is Microsoft Copilot Next in Line?

The transition to a multi-model strategy by GitHub has led to conjecture about whether Microsoft might adopt a comparable approach for its other AI products, specifically Microsoft Copilot. Microsoft Copilot, which is integrated across various Microsoft applications like Word, Excel, and Outlook, has so far depended on OpenAI’s GPT models. However, the achievement of GitHub Copilot’s multi-model framework may encourage Microsoft to reassess its AI approach.

### Justifications for a Multi-Model Microsoft Copilot

There are multiple factors that could motivate Microsoft to consider a multi-model approach for its broader array of AI tools:

– **Varied User Requirements**: While developers benefit from diverse models tailored to different programming languages, non-developers utilizing Microsoft Copilot in productivity software like Word or Excel may also gain from specialized models. For example, some models could generate business reports more effectively, while others might thrive in creative writing or data analysis.
– **Increasing Competition**: Competitors of Microsoft, such as Google and Apple, are also exploring multi-model initiatives. Apple, for instance, aims to integrate OpenAI’s ChatGPT into iOS 18.2, with plans to allow switching to other models like Google’s Gemini down the line. As the competitive landscape in AI intensifies, Microsoft may need to provide similar adaptability to remain in a leading position.
– **Recent Strains with OpenAI**: Microsoft’s strong alliance with OpenAI has been a foundational part of its AI strategy. However, recent reports have indicated that frustration has arisen regarding internal disruptions at OpenAI, particularly concerning AI safety issues. A multi-model approach could afford Microsoft greater flexibility and lessen its dependency on a singular AI supplier.

### Difficulties and Considerations

Despite the prospective advantages, implementing a multi-model approach for Microsoft Copilot also presents challenges. In contrast to GitHub Copilot
