Feed aggregator

Alibaba Creates AI Chip To Help China Fill Nvidia Void

Slashdot.org - Sat, 08/30/2025 - 05:00
Alibaba, China's largest cloud-computing company, has developed a domestically manufactured, versatile inference chip to fill the gap left by U.S. restrictions on Nvidia's sales in China. The Wall Street Journal reports: Previous cloud-computing chips developed by Alibaba have mostly been designed for specific applications. The new chip, now in testing, is meant to serve a broader range of AI inference tasks, said people familiar with it. The chip is manufactured by a Chinese company, they said, in contrast to an earlier Alibaba AI processor that was fabricated by Taiwan Semiconductor Manufacturing. Washington has blocked TSMC from manufacturing AI chips for China that use leading-edge technology. [...] Private-sector cloud companies including Alibaba have refrained from bulk orders of Huawei's chips, resisting official suggestions that they should help the national champion, because they consider Huawei a direct rival in cloud services, people close to the firms said. China's biggest weakness is training AI models, for which U.S. companies rely on the most powerful Nvidia products. Alibaba's new chip is designed for inference, not training, people familiar with it said. Chinese engineers have complained that homegrown chips including Huawei's run into problems when training AI, such as overheating and breaking down in the middle of training runs. Huawei declined to comment.

Read more of this story at Slashdot.

China Turns On Giant Neutrino Detector That Took a Decade To Build

Slashdot.org - Sat, 08/30/2025 - 02:00
China has turned on the world's most sensitive neutrino detector after more than a decade of construction. The Register reports: The Jiangmen Underground Neutrino Experiment (JUNO) is buried 700 meters under a mountain and features a 20,000-tonne "liquid scintillator detector" that China's Academy of Science says is "housed at the center of a 44-meter-deep water pool." There's also a 35.4-meter-diameter acrylic sphere supported by a 41.1-meter-diameter stainless steel truss. All that stuff is surrounded by more than 45,000 photo-multiplier tubes (PMTs). The latter devices are super-sensitive light detectors. A liquid scintillator is a fluid that, when exposed to ionizing radiation, produces light. At JUNO, the liquid is 99.7 percent alkylbenzene, an ingredient found in detergents and refrigerants. JUNO's designers hope that any neutrinos that pass through its giant tank bonk a hydrogen atom and produce just enough light that the detector array of PMTs can record their passing, producing data scientists can use to learn more about the particles. At this point, readers could sensibly ask how JUNO will catch any of these elusive particles. The answer lies in the facility's location -- a few tens of kilometers away from two nuclear power plants that produce neutrinos. The Chinese Academy of Science's Journal of High Energy Physics says trials of JUNO succeeded, suggesting it will be able to help scientists understand why some neutrinos are heavier than others so we can begin to classify the different types of the particle -- a key goal for the facility. The Journal also reports that scientists from Japan, the United States, Europe, India, and South Korea, are either already using JUNO or plan experiments at the facility.

Collapse of Critical Atlantic Current Is No Longer Low-Likelihood, Study Finds

Slashdot.org - Fri, 08/29/2025 - 22:30
An anonymous reader quotes a report from The Guardian: The collapse of a critical Atlantic current can no longer be considered a low-likelihood event, a study has concluded, making deep cuts to fossil fuel emissions even more urgent to avoid the catastrophic impact. The Atlantic meridional overturning circulation (Amoc) is a major part of the global climate system. It brings sun-warmed tropical water to Europe and the Arctic, where it cools and sinks to form a deep return current. The Amoc was already known to be at its weakest in 1,600 years as a result of the climate crisis. Climate models recently indicated that a collapse before 2100 was unlikely but the new analysis examined models that were run for longer, to 2300 and 2500. These show the tipping point that makes an Amoc shutdown inevitable is likely to be passed within a few decades, but that the collapse itself may not happen until 50 to 100 years later. The research found that if carbon emissions continued to rise, 70% of the model runs led to collapse, while an intermediate level of emissions resulted in collapse in 37% of the models. Even in the case of low future emissions, an Amoc shutdown happened in 25% of the models. Scientists have warned previously that Amoc collapse must be avoided "at all costs." It would shift the tropical rainfall belt on which many millions of people rely to grow their food, plunge western Europe into extreme cold winters and summer droughts, and add 50cm to already rising sea levels. The new results are "quite shocking, because I used to say that the chance of Amoc collapsing as a result of global warming was less than 10%," said Prof Stefan Rahmstorf, at the Potsdam Institute for Climate Impact Research in Germany, who was part of the study team. "Now even in a low-emission scenario, sticking to the Paris agreement, it looks like it may be more like 25%."
"These numbers are not very certain, but we are talking about a matter of risk assessment where even a 10% chance of an Amoc collapse would be far too high," added Rahmstorf. "We found that the tipping point where the shutdown becomes inevitable is probably in the next 10 to 20 years or so. That is quite a shocking finding as well and why we have to act really fast in cutting down emissions." "Observations in the deep [far North Atlantic] already show a downward trend over the past five to 10 years, consistent with the models' projections," said Prof Sybren Drijfhout, at the Royal Netherlands Meteorological Institute, who was also part of the team. "Even in some intermediate and low-emission scenarios, the Amoc slows drastically by 2100 and completely shuts off thereafter. That shows the shutdown risk is more serious than many people realize." The findings have been published in the journal Environmental Research Letters.

AWS CodeDeploy Troubleshooting

BrandonChecketts.com - Fri, 08/29/2025 - 21:38

CodeDeploy with AutoScalingGroups is a bit of a complex mess to get working correctly, especially with an app that has been working and now needs to be updated for more modern functionality.

Update the startup scripts with the latest versions from https://github.com/aws-samples/aws-codedeploy-samples/tree/master/load-balancing/elb-v2

I found that even the latest scripts there were still not working. My instances were starting up, then dying shortly afterward. CodeDeploy was failing with this error:


LifecycleEvent - ApplicationStart
Script - /deploy/scripts/4_application_start.sh
Script - /deploy/scripts/register_with_elb.sh
[stderr]Running AWS CLI with region:
[stderr][FATAL] Unable to get this instance's ID; cannot continue.

While troubleshooting, I found that common_functions.sh has a get_instance_id() function that was running this curl command to get the instance ID:


curl -s http://169.254.169.254/latest/meta-data/instance-id

Running that command by itself on an instance that was still running returned nothing, which is why the script was failing.

It turns out that newer instances use IMDSv2 by default, and it is required (no longer optional). With that configuration, this unauthenticated curl command fails. To fix this, I replaced the get_instance_id() function with this version:

# Usage: get_instance_id
#
# Writes to STDOUT the EC2 instance ID for the local instance. Returns non-zero if the local
# instance metadata URL is inaccessible.
get_instance_id() {
    TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" -s -f)
    if [ $? -ne 0 ] || [ -z "$TOKEN" ]; then
        echo "[FATAL] Failed to obtain IMDSv2 token; cannot continue." >&2
        return 1
    fi
    INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/instance-id -s -f)
    if [ $? -ne 0 ] || [ -z "$INSTANCE_ID" ]; then
        echo "[FATAL] Unable to get this instance's ID; cannot continue." >&2
        return 1
    fi
    echo "$INSTANCE_ID"
    return 0
}

This version uses the IMDSv2 API to first obtain a session token, then passes that token in a request header to fetch the instance-id.
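The two-step token flow can be sanity-checked locally, since the real metadata endpoint only responds from inside EC2. In this sketch, curl is shadowed by a hypothetical stub function returning canned values (the token and instance ID below are made up, not real metadata responses):

```shell
# Stub standing in for the real curl; returns canned responses so the
# token-then-query flow can run outside EC2.
curl() {
  case "$*" in
    *api/token*)   printf 'test-token' ;;       # stand-in for the session token
    *instance-id*) printf 'i-0abc123def456' ;;  # stand-in for the instance ID
  esac
}

# Step 1: PUT request for a session token, with a TTL header
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" -s -f)

# Step 2: pass the token in a header when reading the metadata path
INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" \
    "http://169.254.169.254/latest/meta-data/instance-id" -s -f)

echo "$INSTANCE_ID"
```

On a real instance, curl's -f flag makes it exit non-zero on HTTP errors (such as the 401 that an IMDSv2-required instance returns to tokenless requests), which is what the error handling in the function above keys off.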

With that code replaced, the application successfully registered with the Target Group, and the AutoScaling group works correctly.

Alternatively (and for troubleshooting), I was able to make IMDSv2 optional using the AWS Console, and via CloudFormation with this part of the Launch Template:

Resources:
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: my-launch-template
      LaunchTemplateData:
        ImageId: ami-1234567890abcdef0
        InstanceType: t4g.micro
        MetadataOptions:
          HttpTokens: optional
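For an instance that is already running, the same setting can also be flipped with the AWS CLI, which is a quick way to confirm whether IMDSv2 enforcement is the culprit (the instance ID below is a placeholder):

```shell
# Relax the token requirement on an existing instance so IMDSv1
# requests work again (instance ID is a placeholder).
aws ec2 modify-instance-metadata-options \
    --instance-id i-0abc123def456 \
    --http-tokens optional \
    --http-endpoint enabled
```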

The post AWS CodeDeploy Troubleshooting appeared first on Brandon Checketts.

Categories: Web

Mastodon Says It Doesn't 'Have the Means' To Comply With Age Verification Laws

Slashdot.org - Fri, 08/29/2025 - 20:25
Mastodon says it cannot comply with Mississippi's new age verification law because its decentralized software does not support age checks and the nonprofit lacks resources to enforce them. "The social nonprofit explains that Mastodon doesn't track its users, which makes it difficult to enforce such legislation," reports TechCrunch. "Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says." From the report: The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, "there is nobody that can decide for the fediverse to block Mississippi." (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.) "And this is why real decentralization matters," said Rochko. Masnick pushed back, questioning why Mastodon's individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law. On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon's own servers specify a minimum age of 16 to sign up for its services, it does not "have the means to apply age verification" to its services. That is, the Mastodon software doesn't support it. The Mastodon 4.4 release in July 2025 added the ability to specify a minimum age for sign-up and other legal features for handling terms of service, partly in response to increased regulation around these areas. The new feature allows server administrators to check users' ages during sign-up, but the age-check data is not stored. That means individual server owners have to decide for themselves if they believe an age verification component is a necessary addition. 
The nonprofit says Mastodon is currently unable to provide "direct or operational assistance" to the broader set of Mastodon server operators. Instead, it encourages owners of Mastodon and other Fediverse servers to make use of resources available online, such as the IFTAS library, which provides trust and safety support for volunteer social network moderators. The nonprofit also advises server admins to observe the laws of the jurisdictions where they are located and operate. Mastodon notes that it's "not tracking, or able to comment on, the policies and operations of individual servers that run Mastodon." Bluesky echoed those comments in a blog post last Friday, saying the company doesn't have the resources to make the substantial technical changes this type of law would require.

Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into 'Romantic' Conversations

Slashdot.org - Fri, 08/29/2025 - 19:45
Meta is rolling out temporary restrictions on its AI chatbots for teens after reports revealed they were allowed to engage in "romantic" conversations with minors. A Meta spokesperson said the AI chatbots are now being trained so that they do not generate responses to teens about subjects like self-harm, suicide, disordered eating or inappropriate romantic conversations. Instead, the chatbots will point teens to expert resources when appropriate. CNBC reports: "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term measures over teen safety. Further reading: Meta Created Flirty Chatbots of Celebrities Without Permission

Vivaldi Browser Doubles Down On Gen AI Ban

Slashdot.org - Fri, 08/29/2025 - 19:02
Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies." Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. 
But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about." Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."

Battlefield 6 Dev Apologizes For Requiring Secure Boot To Power Anti-Cheat Tools

Slashdot.org - Fri, 08/29/2025 - 18:20
An anonymous reader quotes a report from Ars Technica: Earlier this month, EA announced that players in its Battlefield 6 open beta on PC would have to enable Secure Boot in their Windows OS and BIOS settings. That decision proved controversial among players who weren't able to get the finicky low-level security setting working on their machines and others who were unwilling to allow EA's anti-cheat tools to once again have kernel-level access to their systems. Now, Battlefield 6 technical director Christian Buhl is defending that requirement as something of a necessary evil to combat cheaters, even as he apologizes to any potential players that it has kept away. "The fact is I wish we didn't have to do things like Secure Boot," Buhl said in an interview with Eurogamer. "It does prevent some players from playing the game. Some people's PCs can't handle it and they can't play: that really sucks. I wish everyone could play the game with low friction and not have to do these sorts of things." Throughout the interview, Buhl admits that even requiring Secure Boot won't completely eradicate cheating in Battlefield 6 long term. Even so, he offered that the Javelin anti-cheat tools enabled by Secure Boot's low-level system access were "some of the strongest tools in our toolbox to stop cheating. Again, nothing makes cheating impossible, but enabling Secure Boot and having kernel-level access makes it so much harder to cheat and so much easier for us to find and stop cheating." [...] Despite all these justifications for the Secure Boot requirement on EA's part, it hasn't been hard to find people complaining about what they see as an onerous barrier to playing an online shooter. A quick Reddit search turns up dozens of posts complaining about the difficulty of getting Secure Boot on certain PC configurations or expressing discomfort about installing what they consider a "malware rootkit" on their machine. "I want to play this beta but A) I'm worried about bricking my PC. 
B) I'm worried about giving EA complete access to my machine," one representative Redditor wrote.

Meta Created Flirty Chatbots of Celebrities Without Permission

Slashdot.org - Fri, 08/29/2025 - 17:40
Reuters has found that Meta appropriated the names and likenesses of celebrities to create dozens of flirty social-media chatbots without their permission. "While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift 'parody' bots." From the report: Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image. "Pretty cute, huh?" the avatar wrote beneath the picture. All of the virtual celebrities have been shared on Meta's Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots' behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups. Some of the AI-generated celebrity content was particularly risque: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread. Meta spokesman Andy Stone told Reuters that Meta's AI tools shouldn't have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta's production of images of female celebrities wearing lingerie on failures of the company's enforcement of its own policies, which prohibit such content. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," he said. While Meta's rules also prohibit "direct impersonation," Stone said the celebrity characters were acceptable so long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren't. 
Meta deleted about a dozen of the bots, both "parody" avatars and unlabeled ones, shortly before this story's publication.
