Source: Arstechnica.com

“Nvidia GeForce RTX 5090: An Ultra-Fast GPU Priced Similar to an Entire Gaming Rig”

**Nvidia GeForce RTX 5090: An Exceptional GPU for the Ultimate Gamer and Creator**

The Nvidia GeForce RTX 5090 is here, creating excitement in both gaming and technology sectors. With a jaw-dropping price of $1,999, this GPU is certainly not aimed at those with shallow pockets. Featuring innovative technologies, unmatched performance, and a power consumption that could compete with a small appliance, the RTX 5090 is a premium product crafted for individuals who seek nothing but the finest. But does it justify the price? Let’s explore further.

### **The Cost of Excellence**
Nvidia’s flagship GPU, the RTX 5090, comes with a price tag that reflects its premium status. At $1,999, it exceeds the cost of numerous complete gaming setups. While it isn’t Nvidia’s priciest GPU to date (that honor goes to the $2,499 Titan RTX released in 2018), the 5090 is clearly tailored for a select group of users. This is a card designed for dedicated enthusiasts, industry professionals, and affluent gamers who prioritize top-tier performance.

For comparison, one could assemble a high-performance gaming PC using Nvidia’s next-tier GPU, the $999 RTX 5080, for a similar or even lower cost with strategic component choices. Nevertheless, for those seeking the utmost in quality, the RTX 5090 is unbeatable.

### **Performance: Redefining Standards**
The RTX 5090 marks the first consumer GPU to eclipse the performance of the RTX 4090, Nvidia’s prior flagship model. It showcases a 30-40% boost in performance over the 4090 across most gaming benchmarks, facilitated by several pivotal enhancements:

– **CUDA Cores:** An increase from 16,384 in the 4090 to 21,760 in the 5090.
– **Memory:** 32GB of GDDR7 memory with a 512-bit interface, providing an impressive 1,792 GB/s bandwidth.
– **Architecture:** The newly designed Blackwell GPU architecture bolstering both efficiency and capability.

These advancements position the RTX 5090 as a powerhouse for 4K gaming, ray tracing, and demanding creative tasks. In titles such as *Cyberpunk 2077* with ray tracing activated, the 5090 ensures exceptionally smooth performance, even at maximum settings.
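
As a rough sanity check, the quoted 1,792 GB/s figure falls straight out of the 512-bit bus width if one assumes GDDR7 signaling at 28 Gbps per pin, a rate not stated in this summary. A minimal sketch:

```python
# Rough check of the RTX 5090's quoted memory bandwidth.
# Assumption: 28 Gbps effective data rate per pin (not stated in the article).
bus_width_bits = 512
per_pin_gbps = 28                                 # assumed GDDR7 signaling rate

bandwidth_gbps = bus_width_bits * per_pin_gbps    # gigabits per second
bandwidth_gbs = bandwidth_gbps / 8                # gigabytes per second

print(f"{bandwidth_gbs:.0f} GB/s")                # -> 1792 GB/s, matching the spec
```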

### **DLSS 4 and Multi-Frame Generation**
A highlight of the RTX 5090 is DLSS 4, introducing Multi-Frame Generation (MFG). This innovative technology can produce up to three interpolated frames for each rendered frame, leading to a considerable increase in frame rates. Nvidia asserts that with MFG, 15 out of every 16 pixels displayed on your screen can be AI-generated.
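
The pixel claim is easy to reproduce with one extra assumption that this summary does not spell out: that DLSS upscaling renders only a quarter of the output pixels natively while MFG presents four frames (one rendered, three generated) per rendered frame. A sketch of the arithmetic:

```python
# Back-of-the-envelope for the "15 of 16 pixels are AI-generated" claim.
# Assumptions: DLSS renders 1/4 of the output pixels natively, and MFG
# presents 4 frames (1 rendered + 3 interpolated) per rendered frame.
upscale_fraction = 1 / 4          # natively rendered share of each displayed frame
frames_per_render = 4             # frames shown per frame actually rendered

native_share = upscale_fraction / frames_per_render
print(f"natively rendered share: {native_share:.4f}")   # 0.0625 -> 1 in 16
print(f"AI-generated share: {1 - native_share:.4f}")    # 0.9375 -> 15 in 16
```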

However, MFG does have its limitations. It works best when the base frame rate is sufficiently high, rendering it less effective for elevating lower frame rates to playable levels. Moreover, visual artifacts and input lag may become more apparent, particularly in fast-paced gaming scenarios.

### **Design and Cooling: Sleeker and Smarter**
Nvidia has implemented several intentional design modifications in the RTX 5090 Founders Edition:

– **Compact Design:** Unlike the 4090, which required a three-slot space, the 5090 fits neatly into a two-slot configuration, enhancing compatibility with smaller PC cases.
– **Recessed Power Connector:** The 12VHPWR connector has been angled and recessed to minimize cable clutter while improving case compatibility.
– **Enhanced Cooling Solutions:** Both fans are now situated on the same side of the GPU, addressing thermal management concerns in ITX cases.

Despite these enhancements, the RTX 5090 operates at higher temperatures than its predecessor, peaking around 77°C under load. While this remains within safe operating ranges, adequate case airflow is crucial.

### **Power Usage: A Compromise**
The RTX 5090 is indeed a power-hungry component, boasting a Total Graphics Power (TGP) of 575W—125W over the 4090. This elevated power consumption results in greater electricity expenses and necessitates a robust power supply (Nvidia recommends a minimum of 1,000W). Nevertheless, enthusiasts can reduce this impact by undervolting the card, as initial tests indicate notable efficiency improvements with little to no decline in performance.
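
For a sense of scale, here is a purely illustrative estimate of the running cost of that 125W gap; the daily usage hours and the electricity rate are assumptions, not figures from the article:

```python
# Illustrative-only estimate of the extra electricity cost of the 5090's
# higher TGP versus the 4090 (575 W vs. 450 W under full load).
extra_watts = 575 - 450          # the 125 W gap cited above
hours_per_day = 3                # assumed time at full GPU load (assumption)
rate_per_kwh = 0.15              # assumed electricity price in USD (assumption)

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"{extra_kwh_per_year:.0f} kWh/year, "
      f"about ${extra_kwh_per_year * rate_per_kwh:.0f}/year extra")
```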

### **Testing Setup**
The RTX 5090 was evaluated on a high-end gaming testbed that included:

– **CPU:** AMD Ryzen 7 9800X3D
– **Motherboard:** Asus ROG Crosshair X670E Hero
– **RAM:** 32GB G.Skill Trident Z5 Neo DDR5-6000
– **Power Supply:** Thermaltake Toughpower GF A3 1050W

This arrangement ensures that the GPU can perform without bottlenecks, allowing it to reveal its complete capabilities.

### **The Pros and the Cons**
#### **The Pros**
– Substantial

Read More
“After an Ars Investigation, ISP Addresses Noncompliance with New York’s $15 Broadband Law”

### Optimum Under Fire for Noncompliance with New York’s Affordable Broadband Law

New York’s Affordable Broadband Act (ABA), which requires Internet Service Providers (ISPs) to provide low-cost broadband options to low-income residents, has faced pushback and misunderstandings from certain providers. A recent incident involving Optimum, an ISP owned by Altice USA, underscores the difficulties in enforcing the legislation and ensuring that providers adhere to its requirements.

#### Overview of the Law

The New York Affordable Broadband Act, effective January 15, 2025, obligates ISPs with more than 20,000 customers to offer affordable broadband plans to eligible low-income residents. The law outlines two specific plans:
– A $15 monthly option featuring download speeds of at least 25Mbps.
– A $20 monthly option with download speeds of at least 200Mbps.

These rates must encompass all recurring taxes, fees, and equipment rental charges. Eligibility is based on enrollment in programs like the Supplemental Nutrition Assistance Program (SNAP).
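
A minimal sketch of the law’s pricing requirements as summarized above; the plan parameters come from the article, while the checker function and sample values are illustrative only:

```python
# Minimal sketch of the ABA's two required plan tiers (figures from the article).
ABA_PLANS = [
    {"max_monthly_total": 15, "min_download_mbps": 25},
    {"max_monthly_total": 20, "min_download_mbps": 200},
]

def complies_with_aba(monthly_total: float, download_mbps: float) -> bool:
    """monthly_total must already include taxes, fees, and equipment rental."""
    return any(
        monthly_total <= plan["max_monthly_total"]
        and download_mbps >= plan["min_download_mbps"]
        for plan in ABA_PLANS
    )

# Illustrative checks: a $111.20 bill for 100 Mbps versus the $15/50 Mbps plan.
print(complies_with_aba(111.20, 100))  # False
print(complies_with_aba(15.00, 50))    # True
```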

#### William O’Brien’s Compliance Challenges

William O’Brien, a low-income New Yorker on SNAP, sought to switch to Optimum’s $15 broadband plan after becoming aware of the law. At that juncture, O’Brien was paying $111.20 monthly for his broadband service, which included $89.99 for the service, $14 for equipment rental, a $6 “Network Enhancement Fee,” and $1.21 in taxes. Despite meeting the eligibility requirements, his requests were twice denied.
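
A quick arithmetic check of the quoted bill and the savings the $15 plan would imply:

```python
# The line items quoted above sum to the stated total.
old_bill = 89.99 + 14.00 + 6.00 + 1.21   # service + equipment + "Network Enhancement Fee" + taxes
print(f"old total: ${old_bill:.2f}")                              # $111.20
print(f"monthly savings on the $15 plan: ${old_bill - 15:.2f}")   # about $96
```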

During his first attempt to change plans, O’Brien encountered misinformation. A customer service representative informed him that Optimum did not have a $15 plan, even after he provided them with a link to an article explaining the law. Frustrated, O’Brien contacted Ars Technica, a technology news organization, for assistance.

#### Media Involvement and Continued Pushback

After Ars Technica reached out to Optimum’s public relations department, the company acknowledged its error and vowed to rectify the situation. However, when an executive customer relations representative contacted O’Brien, he was informed that the low-income plan was exclusively available to new customers—a condition that the New York law does not allow.

O’Brien’s determination, coupled with ongoing media scrutiny, ultimately led to a resolution. Optimum eventually permitted him to transition to the $15 plan, significantly lowering his monthly bill by nearly $100. However, the plan provided only 50Mbps download and 5Mbps upload speeds, a reduction from his previous 100Mbps service.

#### Systemic Problems and Lack of Readiness

Optimum’s approach to O’Brien’s situation exposed systemic flaws in the company’s application of the law. The company’s website and internal policies initially precluded existing customers from accessing the low-income plan, clearly breaching the ABA. Optimum later acknowledged “confusion among our care teams” and revised its materials and training to align with the law.

The broader implementation of the law has also encountered obstacles. Optimum claimed that the limited notice from state officials provided insufficient time to update its systems and materials. This lack of readiness highlights the challenges associated with enforcing new regulations, particularly in sectors with intricate customer service frameworks.

#### Wider Consequences and Enforcement Issues

Since its introduction, the New York Affordable Broadband Act has met resistance from ISPs. Initially, broadband lobbying organizations successfully blocked the law in 2021, but a U.S. appeals court overturned that decision in 2024. The Supreme Court chose not to hear the case, allowing the law to become effective.

Some ISPs, such as AT&T, have opted to avoid the law entirely. AT&T ceased its 5G home Internet service in New York rather than conform to the affordability mandates. This raises concerns regarding the law’s efficacy and the readiness of ISPs to prioritize consumer access over profit margins.

#### Future Directions

Optimum has committed to promoting its low-income plan more widely and engaging in outreach within low-income communities. However, the company has yet to introduce a $20 plan with 200Mbps speeds, as the law allows. O’Brien, for his part, intends to file a complaint with the New York Attorney General’s office in hopes of obtaining a faster plan.

New York Attorney General Letitia James holds the power to enforce the law and can impose civil penalties of up to $1,000 per infringement. Nonetheless, it remains uncertain how vigorously the state will pursue enforcement or respond to complaints from residents like O’Brien.

#### Conclusion

The situation involving William O’Brien and Optimum underscores the difficulties in applying and enforcing consumer protection regulations within the broadband sector. While the Affordable Broadband Act marks a pivotal advance towards closing the digital divide, its effectiveness is contingent on strong enforcement and the willingness of ISPs to follow through. For low-income residents, the law brings hope for affordable Internet access—but this hope is dependent on providers like Optimum fulfilling their responsibilities.

Read More
“Reevaluating Blood Pressure Assessments: Are We Approaching It Wrongly?”

**Blood Pressure Measurements in a Supine Position: Could They Offer Better Insights into Heart Risks?**

Monitoring blood pressure (BP) is fundamental to managing cardiovascular health, yet recent studies indicate that the conventional approach of measuring seated BP may not effectively evaluate heart risks. A pioneering research published in *JAMA Cardiology* by scholars at Harvard shows that blood pressure readings taken in a lying down position (supine) are substantially more predictive of cardiovascular disease, stroke, heart failure, and mortality than those taken while seated.

This finding questions established medical routines and could significantly alter how blood pressure is assessed in clinical and home environments.

### **Current Best Practices for Blood Pressure Assessment**

Blood pressure has traditionally been measured with the patient seated. The protocol for seated BP readings is detailed, comprising several steps to ensure accuracy:

1. Patients should refrain from eating, drinking, or exercising for 30 minutes before the reading.
2. An empty bladder is recommended.
3. They should remain calm for five minutes prior to the measurement.
4. Feet must be flat on the ground, with legs uncrossed.
5. The arm being assessed should be positioned on a flat surface at heart level.

Despite these comprehensive protocols, the execution in numerous medical facilities often does not meet these criteria. The new research indicates that even with optimal seated BP readings, they may still be less effective than supine readings in forecasting cardiovascular dangers.

### **Noteworthy Discoveries from the Research**

The study examined data from the Atherosclerosis Risk in Communities (ARIC) investigation, which commenced in 1987 and includes over thirty years of follow-up information from 11,369 participants. The outcomes were remarkable:

– **Indicators of Increased Risk:** Individuals with elevated BP readings while lying down yet normal values while seated had notably higher chances of cardiovascular incidents:
  – 53% elevated risk of coronary heart disease.
  – 51% increased risk of heart failure.
  – 62% higher risk of stroke.
  – 78% greater risk of fatal coronary heart disease.
  – 34% elevated risk of overall mortality.

– **Comparison of Seated and Supine Readings:** Conversely, individuals with high BP readings solely while seated (and normal values while lying down) showed no significant increase in risk for coronary heart disease, heart failure, or stroke. Notable exceptions included a 41% higher risk of fatal coronary heart disease and an 11% higher risk of overall mortality—both lower than the risks linked to high supine BP readings.

– **Highest Risk Group:** Those exhibiting high BP readings in both positions encountered the most substantial risks across all cardiovascular metrics.

These insights imply that supine BP measurements may reveal “hidden” hypertension that seated readings overlook, offering a clearer understanding of cardiovascular risk.

### **Reasons Why Lying-Down Blood Pressure Might Be More Informative**

While the precise reasons behind the enhanced accuracy of supine BP readings remain uncertain, researchers have proposed various theories:

1. **Reflecting True Resting BP:** Lying down could yield a more precise evaluation of true resting blood pressure, which seated measurements attempt to capture but may not completely reflect.
2. **Underlying Physiological Factors:** The factors contributing to high BP in the supine position may be more intimately related to cardiovascular risks.
3. **Effects on Heart and Brain:** Elevated BP in the lying down position might exert greater pressure on the heart and brain compared to when seated.

Curiously, these findings are consistent with earlier studies indicating that high BP at night—when individuals are typically lying down—is strongly correlated with cardiovascular risks. Generally, blood pressure decreases during sleep, but those who maintain high levels at night encounter heightened risks.

### **Implications for Clinical Practice and At-Home Monitoring**

The authors of the study propose that measuring blood pressure in a lying down position could serve as an essential method for identifying elevated BP and underlying cardiovascular disease (CVD) risk. However, they warn that these findings are preliminary and must undergo validation through more extensive research. Clinical trials will be required to establish whether managing supine BP with medications is more efficient in mitigating cardiovascular risks than focusing on seated BP readings.

At this moment, the study poses significant inquiries regarding the methodology of BP measurement in medical environments. Should supine BP readings be integrated as a regular component of routine check-ups? If so, how can this be effectively executed without hindering clinical operations?

### **Implications for Patients**

For those monitoring their blood pressure at home, it might be beneficial to compare readings taken in seated and lying down positions. In the ARIC study, supine BP measurements were conducted after participants rested in a lying down position for 20 minutes. Readings were recorded multiple times over a two-minute timeframe to guarantee precision. While this meticulous method may not be feasible for daily home monitoring, it underscores the significance of consistency and appropriate technique.

### **Study Strengths**

Read More
“Researchers Improve Molecular Simulations through Quantum Computing Methods”

### A Revolutionary Step in Quantum Computing: Modeling Electrons in Tiny Molecules

Quantum computing has been widely recognized as the pioneering frontier in computational advancements, with the potential to address challenges that classical computers cannot handle. One of the most thrilling uses of quantum computing is found in chemistry, specifically in modeling the behavior of electrons in small molecules such as catalysts. A recent advancement, featured in *Nature Physics*, has unveiled a groundbreaking method that could greatly streamline these simulations, moving us closer to tangible quantum computing applications.

### The Potential of Quantum Computing in Chemistry

A commonly raised inquiry regarding quantum computing is: *When will it become beneficial?* The answer hinges on the specific issue being considered. While certain uses, like cryptography and optimization, necessitate significant progress in quantum technology, others, such as modeling quantum systems, are more readily achievable in the short term.

Catalysts, essential for enhancing chemical reactions, are ideal candidates for quantum simulations. The electron behavior within these molecules, governed by the principles of quantum mechanics, is notoriously complicated to replicate using classical computers. This complexity arises due to the interactions among electrons, especially those with unpaired spins, which become computationally unmanageable as the system’s complexity increases.

Nonetheless, quantum computers are exceptionally suited for this challenge. By exploiting their capacity to directly simulate quantum systems, they can yield insights into catalyst behavior that are beyond the reach of classical methods.

### The Difficulty of Modeling Electron Behavior

The actions of a catalyst’s electrons are determined by two primary factors: the orbital they occupy and their spin (a quantum property that can be oriented “up” or “down”). While spins of paired electrons in the same orbital cancel each other out, unpaired electrons possess “exposed” spins that engage with surrounding electrons in the molecule. These interactions ultimately dictate the molecule’s energy states and chemical characteristics.

Modeling these interactions using a quantum computer requires the assignment of the molecule’s quantum attributes to qubits. However, this mapping is computationally demanding, needing a substantial number of qubits and an extensive series of quantum operations (gates). The current error rates associated with quantum hardware further complicate the execution, presenting challenges in obtaining precise outcomes.

### An Innovation in Quantum Simulation

Researchers from Berkeley and Harvard have devised a new strategy to enhance the efficiency of these simulations. The approach initiates with classical computers streamlining the issue by concentrating on the most pertinent features of the catalyst’s behavior—particularly, the unpaired spins at low energy states. This simplification lessens the system’s complexity, making it more feasible for quantum technology.

The simplified model is subsequently translated onto a quantum processor. In contrast to conventional quantum algorithms that solely depend on one- and two-qubit gates, this method capitalizes on quantum computers utilizing neutral atoms. These setups facilitate multi-qubit gates, allowing groups of qubits to execute operations collectively. This breakthrough substantially decreases the required number of gates, resulting in quicker and more error-resistant simulations.

### A Case Analysis: Photosynthesis Catalyst Mn₄O₅Ca

To validate their method, the researchers modeled the behavior of Mn₄O₅Ca, a molecule vital to photosynthesis. By calculating the “spin ladder”—the lowest-energy states of the molecule’s electrons—they successfully identified the wavelengths of light the molecule can absorb or emit. This data is essential for comprehending its function in photosynthesis and could have wider implications for the engineering of artificial catalysts.
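
To make the “spin ladder” idea concrete, here is a purely classical toy calculation: a four-spin Heisenberg model with made-up uniform couplings, diagonalized with numpy so its lowest energy levels can be read off. This only illustrates the concept; it is not the paper’s quantum algorithm, nor a model of the actual Mn₄O₅Ca cluster:

```python
# Toy "spin ladder": sorted low-energy eigenvalues of a tiny Heisenberg chain
# of interacting unpaired spins, computed classically with numpy.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def spin_op(op, site, n):
    """Embed a single-site spin operator at `site` in an n-spin Hilbert space."""
    mats = [np.eye(2)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
J = 1.0  # assumed uniform coupling strength (made up for illustration)
H = sum(
    J * (spin_op(s, i, n) @ spin_op(s, i + 1, n))
    for i in range(n - 1)
    for s in (sx, sy, sz)
)

energies = np.linalg.eigvalsh(H)      # eigenvalues, sorted ascending
print(np.round(energies[:5], 3))      # the bottom rungs of the spin ladder
```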

### The Path Forward: Immediate Applications

Although error rates in existing quantum computers remain a prohibitive factor, the efficacy of this innovative approach signifies that only modest enhancements in hardware are required to make it practical. The researchers pointed out that the algorithm’s resource necessities—like the count of measurements and the maximum time for evolution—are well within reach of next-generation quantum devices.

This advancement showcases the distinctive capabilities of quantum computers. Unlike classical systems, which are bound to standard algorithms, quantum computers can directly simulate other quantum systems. This development opens new avenues for addressing intricate issues in chemistry, materials science, and beyond.

### Implications for the Future of Quantum Computing

The importance of this research transcends the specific application of modeling catalysts. It emphasizes the flexibility of quantum computers and their potential to solve problems in innovative ways that were once thought impossible. As technological improvements continue, we can anticipate the emergence of more groundbreaking algorithms that extend the limits of quantum computing’s potential.

In many respects, we are merely beginning to explore the full spectrum of quantum computing’s capabilities. This recent breakthrough serves as a powerful reminder that the field is progressing rapidly, with practical applications on the horizon. As investigations into the unique traits of quantum systems continue, the pertinent question shifts from *if* quantum computers will be beneficial, to *how soon* they can deliver results.

### Conclusion

The creation of efficient quantum algorithms for simulating electron behavior in small molecules signifies a notable advance in the pursuit of practical quantum computing. By harnessing the unique

Read More
Vulnerabilities Leave Millions of Subaru Cars Open to Remote Access and Tracking Threats

### Subaru’s Starlink Vulnerability: A Cautionary Tale for Automotive Cybersecurity and Privacy

In a time when vehicles are increasingly reliant on connectivity, the recent discovery of security flaws in Subaru’s Starlink system acts as a significant warning about the dangers tied to contemporary automotive technology. Security experts Sam Curry and Shubham Shah identified weaknesses in Subaru’s web portal that permitted unauthorized access to vital vehicle functions and sensitive location information. Their findings not only reveal the potential for harmful exploitation but also raise serious questions regarding data privacy within the automotive sector.

### **The Discovery: A Thanksgiving Experiment Gone Wrong**

The investigation commenced when Curry, an experienced security analyst, bought a 2023 Subaru Impreza for his mother, planning to examine its connected features for vulnerabilities afterward. During Thanksgiving in November 2024, Curry and Shah explored the vehicle’s Internet-enabled Starlink system. What they uncovered was concerning: vulnerabilities in Subaru’s employee portal allowed them to remotely manage various vehicle functionalities, such as unlocking doors, honking the horn, and even starting the ignition.

Even more troubling, the researchers were able to access a year’s worth of the vehicle’s location history. This comprehensive data unveiled personal details such as medical appointments, social events, and specific parking locations. The ramifications of such access are significant, as Curry remarked, “Whether someone is cheating on their spouse, having an abortion, or involved with a political organization, there are countless situations in which this could be weaponized against an individual.”

### **How the Hack Worked**

The researchers located the vulnerability within Subaru’s administrative domain, SubaruCS.com, utilized by employees to handle Starlink accounts. They found that they could reset employee passwords merely by guessing their email addresses. Although the system demanded answers to security questions, these protections were placed within the user’s browser instead of on Subaru’s servers, making them easily circumvented.
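
The general shape of that flaw, a reset flow that trusts the browser to have checked the security answers, can be sketched as follows; the function names, fields, and data are hypothetical and not Subaru’s actual API:

```python
# Illustrative contrast between a reset endpoint that relies on client-side
# checks and one that verifies the answer server-side. All names are made up.
from dataclasses import dataclass

@dataclass
class Account:
    email: str
    security_answer: str

accounts = {"employee@example.com": Account("employee@example.com", "blue")}

def reset_password_vulnerable(email: str, new_password: str) -> bool:
    # The security-question "check" lived in client-side JavaScript, so an
    # attacker calling the endpoint directly skips it: knowing the email is enough.
    return email in accounts  # reset succeeds

def reset_password_fixed(email: str, answer: str, new_password: str) -> bool:
    # Server-side verification: the reset proceeds only if the stored answer
    # matches, regardless of what the browser did or did not validate.
    account = accounts.get(email)
    return account is not None and account.security_answer == answer

print(reset_password_vulnerable("employee@example.com", "pwned"))      # True
print(reset_password_fixed("employee@example.com", "wrong", "pwned"))  # False
```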

By employing this strategy, Curry and Shah accessed an employee’s account and discovered they could search for any Subaru owner using basic details like a last name, zip code, or license plate number. After locating an owner, they could transfer control of the vehicle’s Starlink features to any device, effectively commandeering the car’s connected functions.

### **The Privacy Problem: More Than Just a Security Issue**

While Subaru promptly addressed the vulnerabilities upon notification, the event highlights a more extensive problem: the substantial accumulation and retention of location data by automakers. Subaru’s system enabled employees to access detailed location histories for at least one year, prompting inquiries about how this information is stored, who has access to it, and for what purposes it is utilized.

Subaru defended its practices by asserting that location data is used to aid first responders during emergencies, such as detecting collisions. Nevertheless, Curry emphasized that such capabilities do not necessitate a year’s worth of location history. The company failed to clarify how long it retains this data or the measures taken to secure it.

This lack of transparency isn’t exclusive to Subaru. A 2024 report from the Mozilla Foundation characterized modern vehicles as “a privacy nightmare,” highlighting that 92% of car brands offer little to no control over the data collected, and 84% maintain the right to sell or share this information. Subaru contended that it does not sell location data, yet the overarching industry trend remains concerning.

### **The Broader Implications for the Automotive Industry**

The Subaru incident represents just one example among a growing number of automotive cybersecurity issues. Over the preceding two years, researchers have pinpointed similar vulnerabilities in vehicles from Acura, BMW, Honda, Hyundai, Kia, Mercedes-Benz, Toyota, and others. These weaknesses often arise from inadequately secured web portals and APIs, which are increasingly utilized to manage connected vehicle features.

What differentiates Subaru’s case is the granularity of the accessible location data. This raises serious privacy issues, as vehicles evolve into “data-hungry machines,” amassing extensive information about their owners’ movements and behaviors. The potential for abuse—whether by malicious hackers, rogue employees, or even governments—should not be underestimated.

### **The Need for Stricter Regulations and Better Practices**

The Subaru incident underscores the pressing necessity for stricter regulations and improved cybersecurity practices within the automotive arena. While Subaru has addressed the specific vulnerabilities highlighted by Curry and Shah, the broader privacy concerns persist. As Robert Herrell, executive director of the Consumer Federation of California, pointed out, “Individuals are being monitored in ways they are utterly unaware of.”

Legislation aimed at curbing data collection and enhancing transparency is an essential step forward. For instance, California has proposed bills to safeguard victims of domestic abuse from being tracked via their vehicles. However, more thorough measures are required to tackle the wider privacy and security dilemmas presented by connected cars.

### **Conclusion: A Cautionary Tale for Consumers and Automakers**

The Subaru Starlink vulnerability serves as a cautionary tale for both consumers and automakers.

Read More
“Evaluation of China’s Reusable Rocket Efficiency; DOT to Examine SpaceX Penalties”

### Rocket Lab Set to Deploy German Wildfire Detection Satellites: Advancing Global Environmental Oversight

Rocket Lab, a key player in the small satellite launch sector, has unveiled plans for a mission to launch eight wildfire detection satellites on behalf of the German firm OroraTech. This mission highlights the expanding significance of space technology in tackling global environmental issues, particularly the rising danger of wildfires intensified by climate change.

#### **The Mission: Rapid Progress for an Urgent Cause**
Rocket Lab’s Electron rocket will transport the eight satellites into orbit from its launch site in New Zealand. This mission demonstrates a swift progression from contract signing to launch, showcasing Rocket Lab’s flexibility in fulfilling time-critical demands. The satellites form part of OroraTech’s network aimed at delivering real-time wildfire surveillance, a function increasingly vital as wildfires occur more frequently and wreak havoc globally.

The launch is slated to take place in the coming weeks, coinciding with the urgent need for wildfire detection during this season. This fast-tracked deployment will allow OroraTech to bolster its worldwide monitoring abilities, facilitating quicker and more efficient wildfire response initiatives.

#### **OroraTech’s Vision: A Network of Infrared Observers**
OroraTech’s satellites come equipped with state-of-the-art thermal infrared cameras capable of continuous monitoring of wildfires on a global scale. These cameras can identify heat signatures from fires, supplying essential information to emergency services, forest administrators, and policymakers. The aim is to lessen the repercussions of wildfires on ecosystems, communities, and infrastructure by allowing for swifter response times and more educated decision-making.

This launch represents a pivotal achievement for OroraTech, which has previously deployed three prototype satellites since 2022. The company plans to grow its constellation to as many as 100 satellites by 2028, establishing a robust system for real-time wildfire observation.

#### **The Contribution of Space Technology to Wildfire Management**
Wildfires have emerged as a worldwide crisis, with areas such as California, Australia, and the Mediterranean facing increasingly intense fire seasons. Conventional methods of wildfire detection, like ground-based sensors and aerial surveillance, often fail to deliver thorough and timely information. In contrast, space-based systems provide a distinct perspective, enabling extensive monitoring and prompt identification of fire outbreaks.

OroraTech’s satellites belong to a wider movement towards utilizing space technology for environmental oversight. By supplying up-to-the-minute data regarding wildfire locations, intensity, and dissemination, these satellites can contribute to saving lives, safeguarding ecosystems, and minimizing economic damage.

#### **Rocket Lab’s Expanding Portfolio**
Rocket Lab has positioned itself as a frontrunner in the small satellite launch industry, offering specialized and agile launch services. The company’s Electron rocket is particularly adept for missions like OroraTech’s, where accuracy and swiftness are crucial.

This mission also underscores Rocket Lab’s dedication to fostering innovative uses of space technology. By facilitating the deployment of OroraTech’s wildfire detection satellites, Rocket Lab is playing a role in a global initiative to confront one of the most urgent environmental challenges of our era.

#### **Looking Forward: The Future of Space-Enabled Environmental Oversight**
The partnership between Rocket Lab and OroraTech illustrates the potential of space technology in tackling vital global challenges. As satellite technology progresses, we can anticipate a surge in applications across areas such as climate monitoring, disaster management, and resource allocation.

OroraTech’s ambitious objective to establish a constellation of up to 100 satellites by 2028 indicates a rising demand for space-based environmental remedies. With enterprises like Rocket Lab delivering dependable and effective launch services, the prospects for space-enabled environmental monitoring appear bright.

#### **Conclusion**
Rocket Lab’s forthcoming launch of OroraTech’s wildfire detection satellites signifies a key advancement in the battle against wildfires. By merging cutting-edge satellite technology with prompt launch services, this mission exemplifies the transformative impact of space technology in addressing global dilemmas. As the planet contends with the escalating effects of climate change, initiatives like this provide optimism for a more resilient and sustainable future.

Read More
Survey Shows Increase in Game Developers Concentrating on PC Games

# The Growth of PC Game Development: An Examination of the 2025 Boom

The gaming sector is well-acquainted with fluctuations in platform popularity, yet the latest insights from Informa’s *State of the Game Industry* survey point to a notable trend: PC game development is witnessing a remarkable upswing. The survey shows that 80% of game developers are currently focused on PC projects, a rise from 66% just a year prior. This represents the highest recorded percentage for PC development since tracking began in 2018. The results underscore the increasing preeminence of the PC as a development platform and its lasting allure within a continually changing gaming environment.

## A Milestone in PC Development

The annual *State of the Game Industry* survey, conducted with Omdia, polled over 3,000 gaming industry professionals ahead of the 2025 Game Developers Conference (GDC). The findings indicate a substantial rise in the proportion of developers engaged in PC projects, with 80% of respondents currently involved in PC game development. This is a significant jump from the 56–66% range seen in previous years.

The survey also highlighted a growth in developer enthusiasm for the PC platform, with 74% expressing excitement for PC projects, compared to 62% last year. This simultaneous rise in active development and interest emphasizes a wider trend: the PC is not just a fundamental part of the gaming industry but a flourishing center for innovation and creativity.

## Rationale Behind the PC’s Preeminence

The PC has long been a preferred choice for game developers, consistently outshining consoles and mobile devices in terms of active projects. While console and mobile engagement ranges from 12% to 36% of developers, the PC’s adaptability, scalability, and global reach make it a lasting selection. Informa’s report describes this year’s increase as a “passion for PC development surging,” reinforcing the platform’s leadership.

The statistics also resonate with broader market movements. Steam, the leading PC gaming platform, recorded an extraordinary 18,974 individual game releases in 2024, as reported by SteamDB. This marks a 32% growth from 2023 and highlights the rising interest in PC gaming from both developers and gamers. However, it’s notable that not all Steam releases fulfill Valve’s minimum engagement criteria for features such as Badges and Trading Cards, indicating a blend of high-quality titles and smaller, less-refined projects.

## The Impact of the Steam Deck

A potential catalyst for the boom in PC game development is the rising popularity of Valve’s Steam Deck. This portable gaming device has revolutionized the way players experience PC games, offering the ability to enjoy favorite titles on the move. While Valve has confirmed “multiple millions” in sales, industry analysts estimate that between 3 million and 4 million Steam Deck units have been sold by late 2023, a substantial rise from 1 million units reported in 2022.

The success of the Steam Deck has likely encouraged developers to emphasize PC projects, as the device connects traditional PC gaming with the portability of consoles like the Nintendo Switch. The opportunity to play PC games on the couch, in bed, or even while traveling has widened the attraction of the platform, appealing to both developers and gamers.

## Expanding Market for PC Gaming

The increase in PC game development reflects a broader trend that underscores the platform’s resilience and growth. DFC Intelligence indicates there are nearly 2 billion PC gamers globally, solidifying its position as one of the largest gaming markets. Steam, a major player in the PC gaming ecosystem, recently recorded 39 million concurrent players, further highlighting the platform’s extensive popularity.

The versatility of the PC is another key element driving its success. In contrast to consoles, which are bounded by specific hardware generations, PCs offer various configurations that cater to both budget-conscious gamers and those seeking top-tier performance. This flexibility enables developers to craft games that can appeal to a wide audience, ranging from casual players to hardcore enthusiasts.

## Obstacles and Considerations

While the data suggests a positive outlook for PC gaming, it is essential to consider potential challenges. The *State of the Game Industry* survey derives from a self-selected group of developers, mainly those present at the GDC. This demographic may not fully represent the global game development community, which encompasses hobbyists and professionals outside traditional industry centers.

Moreover, the rise in PC game development might be influenced by temporary factors, such as the excitement surrounding new hardware like the Steam Deck. There is also a possibility that this year’s data reflects an atypical sample, as all major platforms (except the Xbox Series X/S) experienced slight increases in developer engagement.

## The Horizon of PC Gaming

Notwithstanding these considerations, the 2025 survey reaffirms the PC’s role as a cornerstone of the gaming industry. The platform’s adaptability to emerging trends, such as portable gaming and advanced graphical capabilities, guarantees its sustained significance. As developers continue to explore the opportunities presented by

Read More
“Backdoor Infecting VPNs Used ‘Magic Packets’ for Stealth and Security”

**J-Magic Backdoor: An Advanced Menace Aimed at Corporate Networks**

In the dynamic realm of cybersecurity risks, the emergence of the J-Magic backdoor has raised alarms across various sectors. This intricate malware, constructed with precision and stealth, has been detected infiltrating enterprise VPNs running Juniper Networks’ Junos OS. Its elaborate design and distinctive attributes pose a serious threat to businesses in industries such as semiconductors, energy, manufacturing, and IT.

### **Understanding the Functionality of J-Magic**

J-Magic is not an ordinary backdoor malware. It features a passive agent that stays dormant until triggered by a “magic packet” — a uniquely crafted signal that activates the malware. This method allows the backdoor to evade detection from conventional network security measures, as it does not necessitate an open port for incoming connections. Instead, it discreetly observes all TCP traffic for particular conditions that will prompt its activation.

Once triggered, J-Magic takes further measures to secure its entry. It sends a challenge to the initiating device in the form of an encrypted string via the public part of an RSA key. The device is required to reply with the matching plaintext, thereby confirming possession of the private key. This RSA challenge safeguards against unauthorized access, ensuring that only those with valid credentials can enter.
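
The article does not describe J-Magic’s exact key handling, but the challenge-response idea itself is standard. A generic sketch using the Python `cryptography` package, in which only the holder of the private key can return the plaintext:

```python
# Generic RSA challenge-response sketch (not J-Magic's actual scheme):
# the challenger encrypts a random value with the public key; only the
# private-key holder can decrypt it and echo the plaintext back.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

challenge = os.urandom(16)                      # random challenge value
ciphertext = public_key.encrypt(challenge, oaep)

response = private_key.decrypt(ciphertext, oaep)  # only the key holder can do this
print(response == challenge)                      # True -> access granted
```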

### **In-Memory Execution for Greater Stealth**

A prominent trait of J-Magic is its ability to exist solely in memory. This makes detection and analysis by defenders drastically more difficult, as it leaves no traces on the storage of the compromised device. This in-memory execution, paired with its passive listening features, renders J-Magic a highly elusive threat.

### **The Function of Magic Packets**

At the heart of J-Magic’s stealthiness are magic packets. These packets contain particular data patterns that the malware identifies, enabling it to integrate smoothly with regular network traffic. Black Lotus Labs, the researchers who detected J-Magic, recognized five specific conditions that engage the backdoor. These conditions consist of unusual yet intentional configurations of TCP headers and payloads, ensuring the magic packets are both subtle and distinct.

For instance, one condition necessitates that the TCP options field includes a precise two-byte sequence, while another requires a predetermined string in the payload data. These stipulations are meticulously crafted to steer clear of detection by network defense systems, all while remaining distinctive enough to prevent inadvertent activation.
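
The actual trigger conditions are not reproduced here, so the following matcher is purely illustrative: it shows the kind of passive check an implant of this sort might run against every observed TCP segment, with hypothetical option bytes and a hypothetical payload marker:

```python
# Illustrative matcher for the *kind* of checks described above; the byte
# sequence and payload string below are hypothetical placeholders, not
# J-Magic's real trigger values.
MAGIC_OPTION_BYTES = b"\x13\x37"       # hypothetical two-byte sequence in TCP options
MAGIC_PAYLOAD_MARKER = b"wake-up"      # hypothetical string in the payload

def looks_like_magic_packet(tcp_options: bytes, payload: bytes) -> bool:
    """Return True when a passively observed segment matches a trigger condition."""
    return MAGIC_OPTION_BYTES in tcp_options or MAGIC_PAYLOAD_MARKER in payload

# A passive implant would run such checks against all observed traffic and stay
# silent otherwise -- no listening port is ever opened.
print(looks_like_magic_packet(b"\x02\x04\x05\xb4\x13\x37", b""))   # True
print(looks_like_magic_packet(b"\x02\x04\x05\xb4", b"normal"))     # False
```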

### **Reverse Shell and Command Execution**

Upon activation, J-Magic establishes a reverse shell, enabling attackers to run arbitrary commands on the infected machine. The reverse shell utilizes SSL for communication, providing an additional encryption layer to avoid detection. Attackers can then operate the device through a command prompt, marked by the “>>” symbol, until they issue an exit command.

To further fortify the connection, J-Magic employs a challenge-response system using a hardcoded RSA key. This guarantees that only authorized attackers can engage with the backdoor, thwarting other malicious entities from taking control of the compromised device.

### **Historical Context and Connections**

J-Magic isn’t the first malware to utilize magic packets or RSA challenges. The notion of a “truly invisible” backdoor was initially presented in 2000 with the launch of cd00r, a proof-of-concept backdoor aimed at evading detection by passively listening. In 2014, the Russian-state threat group Turla integrated a comparable technique into its own malware.

Interestingly, J-Magic bears resemblance to SeaSpy, another backdoor unveiled in 2023 that targeted Barracuda mail servers. Both backdoors are created to operate on FreeBSD, the OS employed in Juniper and Barracuda devices, and both heavily draw from the cd00r concept.

### **Consequences for Cybersecurity**

The identification of J-Magic emphasizes the growing sophistication of cyber threats. By leveraging advanced methods such as in-memory execution, magic packets, and RSA challenges, attackers are discovering innovative ways to elude detection and maintain presence in targeted networks.

For organizations, this serves as a sharp reminder regarding the necessity of robust cybersecurity practices. Traditional defenses like port scanning and signature-based detection have become insufficient against threats like J-Magic. Rather, organizations need to adopt advanced monitoring solutions capable of scrutinizing network traffic for nuanced anomalies and invest in threat intelligence to stay ahead of developing threats.

### **Conclusion**

J-Magic signifies a new evolution in backdoor malware, intertwining stealth, precision, and resilience to infiltrate and persist within corporate networks. Its discovery highlights the ongoing need for vigilance and innovation in cybersecurity. As threat actors continue to hone their techniques, defenders must remain just as nimble, utilizing cutting-edge technologies and collaborative intelligence to shield their networks from these sophisticated incursions.

Read More
OpenAI Unveils Operator: An AI Agent Tailored to Execute Tasks on Your Computer

# OpenAI’s “Operator” and the Computer-Using Agent: A New Chapter in AI-Driven Task Automation

OpenAI has introduced a revolutionary research preview of “Operator,” a web automation solution powered by its latest AI model, the **Computer-Using Agent (CUA)**. This advanced framework allows AI to engage with computers via a visual interface, imitating human behavior like clicking, typing, and scrolling. By utilizing screenshots and simulated interactions, Operator aims to transform the way users carry out tasks on their devices. Nevertheless, as is the case with any new technology, it comes with various limitations, safety issues, and privacy concerns.

## **What is Operator?**

Operator is an online tool intended to assist users with on-screen activities by replicating human-like interactions with a computer. Different from conventional automation tools that depend on pre-established scripts or APIs, Operator employs the Computer-Using Agent to visually perceive and interact with on-screen components in real time. This enables it to adjust to changing interfaces and execute tasks across diverse applications.

The tool is currently accessible to subscribers of OpenAI’s $200/month ChatGPT Pro plan, with aims to broaden availability to Plus, Team, and Enterprise users. OpenAI also plans to embed Operator’s functionalities directly into ChatGPT and to offer CUA via its API for developers.

## **How Does It Work?**

The Computer-Using Agent functions in a repetitive loop, comprising the following stages:

1. **Screen Monitoring**: The AI captures screenshots of the user’s display to comprehend the current interface status.
2. **Image Analysis**: Utilizing GPT-4o’s vision technology enhanced by reinforcement learning, the system analyzes raw pixel data to recognize on-screen elements, such as buttons, text boxes, and menus.
3. **Decision-Making**: Based on its assessment, the AI identifies suitable actions to perform, including clicking, typing, or scrolling.
4. **Execution**: The system executes virtual inputs to interact with the computer, imitating human behavior.

This loop enables the AI to recover from mistakes and tackle complex tasks, such as browsing websites, completing forms, or even organizing files on a computer.
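
A schematic sketch of that loop follows; every function below is a stand-in stub, since OpenAI has not published Operator’s internals, so it shows only the shape of the cycle rather than a working agent:

```python
# Schematic observe-analyze-decide-act loop. All functions are stubs; names,
# behavior, and the stop condition are placeholders, not OpenAI's API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "click", "type", "scroll", or "done"
    target: str = ""

def capture_screen() -> bytes:                         # 1. screen monitoring (stub)
    return b"fake-screenshot"

def analyze_pixels(shot: bytes) -> list[str]:          # 2. image analysis (stub)
    return ["search box", "submit button"]

def decide_next_action(task: str, elements: list[str]) -> Action:  # 3. decision-making (stub)
    return Action("done") if "submit button" in elements else Action("click", elements[0])

def execute_virtual_input(action: Action) -> None:     # 4. execution (stub)
    print(f"performing {action.kind} on {action.target!r}")

def run_agent(task: str, max_steps: int = 50) -> None:
    for _ in range(max_steps):
        action = decide_next_action(task, analyze_pixels(capture_screen()))
        if action.kind == "done":
            break
        execute_virtual_input(action)

run_agent("add milk to the shopping list")
```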

## **Performance and Limitations**

While Operator holds potential, it is not without flaws. OpenAI’s internal evaluations indicate that the system excels at repetitive web tasks, such as generating shopping lists or playlists, but encounters difficulties with more intricate or unfamiliar interfaces. For instance:

– **Success Rates**:
  – Achieved an **87% success rate** on the [WebVoyager](https://github.com/MinorJerry/WebVoyager) benchmark, evaluating live sites like Amazon and Google Maps.
  – Scored **58.1%** on [WebArena](https://webarena.dev/), which assesses offline test sites.
  – Recorded a **38.1%** on the

Read More
“Federal Agencies Instructed to Conclude Remote Work, Aiming for Transition Within 30 Days”

**US Agencies Squandering Billions on Vacant Offices: Return-to-Office Initiative Provokes Discussion**

As part of a decisive effort to combat what has been labeled a “national embarrassment,” all federal agencies in the United States have received orders to end remote work arrangements and present return-to-office (RTO) strategies by January 24, 2025. This mandate originates from a memorandum released by the acting director of the Office of Personnel Management (OPM), Charles Ezell, in the wake of President Donald Trump’s executive order on “Return to In-Person Work.” This resolution has rekindled discussions regarding the future of telework in the federal sector, the effectiveness of federal operations, and the economic ramifications of vacant office spaces.

### **The Drive for In-Person Work**

The memorandum underscores a stark reality: federal offices nationwide remain predominantly underoccupied, with numerous employees still working from home long after the COVID-19 pandemic made such arrangements necessary. Ezell’s memo condemns this phenomenon, asserting that “virtually unrestricted telework has resulted in diminished government services and complicated the supervision and training of government personnel.” The acting OPM director also mentioned broader economic consequences, asserting that the absence of in-person work has “devastated” local economies, especially in Washington, D.C., where many federal offices are situated.

The dilemma of unoccupied offices has surged as a concern, with a recent report from the House Committee on Oversight and Government Reform estimating that billions of taxpayer dollars are squandered on abandoned federal office space. The report charged the former Biden administration with inadequately assessing the impact of telework on agency functionality and mission success, labeling the absence of data a major lapse.

### **A “National Embarrassment”**

Ezell’s memo and the related House report both stress the symbolic and practical repercussions of vacant federal offices. Characterizing the situation as a “national embarrassment,” the memo contends that the current condition erodes public confidence in governmental efficiency and accountability. The report further rebuked federal unions for purportedly utilizing the collective-bargaining framework to secure indefinite telework arrangements, which it asserts have not been linked to measurable performance objectives.

The memo also raised alarms regarding the long-term sustainability of telework policies, suggesting that they have not effectively tackled recruitment and retention obstacles or boosted productivity. Instead, the report advocated for aligning remote work policies with performance metrics and monitoring telework through automated systems to guarantee accountability.

### **Economic and Operational Consequences**

The financial repercussions of retaining vacant office spaces are staggering. Federal agencies persist in paying for owned and leased properties that remain sparsely used, siphoning resources that could be redirected to other priorities. The House report proposed that selling off unnecessary properties and ending unwarranted leases could ease this financial strain on taxpayers.

Beyond the monetary concerns, the memo spotlighted operational difficulties posed by remote work, including challenges in supervising employees, promoting collaboration, and providing sufficient training. These complications, it argued, have obstructed federal agencies’ ability to effectively carry out their missions.

### **Exceptions and Flexibility**

Although the RTO mandate is comprehensive, it does permit certain exceptions. Employees with disabilities, qualifying medical conditions, or other compelling justifications certified by their agency head and supervisor may qualify for exemptions. Nevertheless, Ezell’s memo emphasized that such exceptions would be restricted and closely supervised.

The memo also recognized that prior efforts to motivate individual agencies to return employees to the office had generally failed, necessitating a centralized approach. “The only method to ensure employees return to the office is to implement a centralized policy mandating return-to-work for all agencies throughout the federal government,” Ezell asserted.

### **A Tight Deadline**

Federal agencies are faced with a rigorous deadline to adhere to the new directive. By 5 pm ET on January 24, 2025, all agencies need to present their RTO plans, detailing the date by which they will fully align with the new telework policy. Ezell suggested a 30-day timeline for full implementation, underlining the administration’s urgency in tackling this matter.

### **The Wider Discussion**

The RTO mandate has ignited a broader conversation concerning the future of work within the federal government. Advocates of in-person work assert that it is crucial for sustaining accountability, enhancing collaboration, and ensuring effective service delivery. Detractors, however, argue that telework provides significant advantages, including improved work-life balance, decreased commuting times, and the ability to attract talent from a wider geographic range.

The decision to limit remote work also prompts inquiries about the role of technology in modernizing government operations. As private-sector companies increasingly adopt hybrid work models, the federal government’s decision to mandate in-person work could be perceived as a regression, potentially hindering its competitiveness in attracting top talent.

### **Conclusion**

The return-to-office directive signifies a major pivot in federal workforce policy, with extensive implications for government operations, taxpayer expenditures, and employee morale. While the initiative seeks to rectify the inefficiencies and costs associated with vacant office spaces, it

Read More