
What to expect from WWDC 2026

WWDC 2026, the latest edition of Apple's yearly developer conference, runs from June 8-12, and by all appearances the company has some important updates to outline. Unlike Liquid Glass, the design material Apple introduced last year and now uses across all its operating systems, the new features the company is rumored to announce may not be aesthetic, but they could make just as big a splash, mainly because Apple might finally be ready to show off its second stab at an overhauled version of Siri.

If you're curious to see the company's new plans for yourself, you can watch Apple's WWDC 2026 keynote live on its website, YouTube channel or the Apple Developer Bilibili channel in China. Apple will also host its Platforms State of the Union stream and individual developer workshops on its developer website if you want to learn even more details about the software updates the company will release later this year. Luckily, we do have some sense of what Apple has in store, and it looks like stability improvements and AI are the company's big focuses for the updates coming to iOS, iPadOS, macOS, watchOS, visionOS and tvOS this fall.

A Snow Leopard-esque approach to stability and performance

Apple released Mac OS X Snow Leopard in 2009, primarily as a way to clean up the performance and refine the new features the company released with Mac OS X Leopard two years prior. The decision to essentially "take a year off" to focus on making everything about the company's desktop operating system feel better was well-received, and Apple is apparently planning to have iOS 27 serve a similar role.

Bloomberg reports that Apple's upcoming update will be "focused on improving the software’s quality and underlying performance" and that the company's "engineering teams are now combing through Apple's operating systems, hunting for bloat to cut, bugs to eliminate and any opportunity to meaningfully boost performance and overall quality." Those fixes will presumably extend to the company's other operating systems, too.

Some of this effort may also be focused on cleaning up the visual changes introduced in Apple's big switch to Liquid Glass. The design overhaul has been controversial among the company's diehard fans, and Apple has already introduced tweaks in updates that arrived after the release of iOS 26 to make Liquid Glass interfaces more legible. Bloomberg reports the company could go a step further in its next updates and add a system-wide slider that will allow users to adjust the intensity of Liquid Glass (visual effects like translucency and reflectivity) they want in the interface.

The chatbot-ification of Siri

While stability and performance improvements will be a major focus of this year's updates, Apple is also rumored to be making some major changes to Siri. When the company first introduced Apple Intelligence at WWDC 2024, it promised to launch an updated version of the voice assistant that could use your personal context (like the information securely stored on your iPhone) to act across apps. Apple delayed those features in March 2025 and then announced a partnership with Google in January 2026 to use Gemini models to presumably make them possible.

Those features might finally arrive in this year's updates, but Apple is reportedly also changing how users interact with Siri by making the assistant more like a chatbot, according to Bloomberg. This would make the assistant more interactive and natural to speak to, and could open up other possibilities, like letting users direct Siri to perform two actions at the same time. Developers will reportedly also be able to integrate their own AI assistants with Siri, much like OpenAI has with ChatGPT.

New places to talk to AI

The chatbot version of Siri will be accessible in the usual ways, but also reportedly through a standalone Siri app. The new app will let users prompt the assistant to take care of tasks on their device, search the web and even access news, not unlike current Gemini and ChatGPT apps. Bloomberg writes that the app will also be a way to review past conversations with Siri and receive suggestions of prompts to try with the new chatbot version of the assistant.

Users will also be able to interact with Siri inside Apple's other apps via a new feature called "Ask Siri." This may appear as an option in app menus, and allow you to ask the AI assistant questions about content in the app. It's not clear if this will be as in-depth or capable as Google's Ask Maps or Ask Photos features, but it at least seems like Apple's thinking along the same lines as its partner.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/what-to-expect-from-wwdc-2026-110000086.html?src=rss

Tesla's robotaxis are reportedly remotely driven by humans, sometimes

In a letter shared with Senator Ed Markey (D-Mass.), Tesla admitted that its robotaxis are sometimes driven remotely by human operators, Wired reports. Competing self-driving car companies sometimes rely on human operators to tell robotaxi software how to get itself unstuck, but letting operators actually drive those cars remotely is more unusual.

"​​As a redundancy measure in rare cases … [remote assistance operators] are authorized to temporarily assume direct vehicle control as the final escalation maneuver after all other available intervention actions have been exhausted,” Karen Steakley, Tesla’s director of public policy and business development, shared in a letter to Markey. In those situations, operators are reportedly able to take over Tesla's robotaxis when they're moving at speeds around 2mph or less, and then drive the car at up to 10mph if software permits it.

Engadget has contacted Tesla to confirm the details shared in Steakley's letter. We'll update the article if we hear back.

As Wired notes, that's a bit different from how other self-driving car companies handle human intervention. For example, Waymo's Driver software can call on human help (Waymo calls these workers "fleet response") to offer context and answer questions that help it navigate complicated driving situations. The company claims these workers never drive the robotaxi themselves, but they are able to see the car's environment through its sensors to help it get unstuck. Self-driving car companies typically avoid remote operation, Wired writes, because technical limitations like latency and the limited perspective of a robotaxi's sensors can make it hard to drive them safely.

Tesla's approach to self-driving has always cut against the grain, though. Whereas competitors continue to rely on a mix of radar and other sensors to navigate, Tesla has exclusively focused on using cameras for its Full Self Driving (FSD) system. The company has also had to deal with a number of high-profile crashes related to FSD, which prompted a probe by the US National Highway Traffic Safety Administration in October 2025.

The company launched its robotaxi service in Austin, Texas in June 2025, in a limited capacity and with human safety drivers sitting in the driver's seat in case of emergency. Tesla is also reportedly testing rides without safety drivers in the same area, which might be why it has contingencies for remote operators to step in.

This article originally appeared on Engadget at https://www.engadget.com/transportation/teslas-robotaxis-are-reportedly-remotely-driven-by-humans-sometimes-200639550.html?src=rss

Someone programmed a 65-year old computer to play Boards of Canada's 'Olson'

The Programmed Data Processor-1 (PDP-1) is perhaps most recognizable as the home of Spacewar!, one of the world's first video games, but as the video above proves, it also works as an enormous and very slow iPod.

In the video, Boards of Canada's "Olson" plays off of paper tape that's carefully fed into the PDP-1 by engineer and Computer History Museum docent Peter Samson. It's the final product of Joe Lynch's PDP-1.music project, an attempt to translate the short and atmospheric song into something the PDP-1 can reproduce.

As Lynch writes on GitHub, the "Harmony Compiler" used to translate "Olson" to paper tape was actually created by Samson to play audio through four of the computer's lightbulbs while he was a student at MIT in the 1960s. He used it to recreate classical music, but it'll work with '90s electronic music in a pinch, too.

"While these bulbs were originally intended to provide program status information to the computer operator," Lynch writes, "Peter repurposed four of these light bulbs into four square wave generators (or four 1-bit DACs, put another way), by turning the bulbs on and off at audio frequencies." The signal from each bulb is then downmixed into stereo audio channels, transcribed via an emulator and merged into a single file that has to be manually punched into the paper tape that's fed into the PDP-1.

It's a laborious process for playing even the simplest of songs, but it's worth it to hear Boards of Canada's already nostalgic music from an even older classic computer.

This article originally appeared on Engadget at https://www.engadget.com/audio/someone-programmed-a-65-year-old-computer-to-play-boards-of-canadas-olson-220857441.html?src=rss

© Joe Lynch

Peter Samson loading a song into the PDP-1.

The final details of Samsung's Android XR headset have been all but confirmed

After announcing its intentions to make an XR device in 2023, and revealing the design and intended use-cases for the headset alongside the announcement of Android XR in 2024, Samsung has shared precious few details about Project Moohan. A new leak from Android Headlines is set to change that, detailing not only the specs of Samsung's new headset, but also a final name and new controller accessories ahead of the device's rumored launch later this fall.

Samsung's Project Moohan, officially called "Samsung Galaxy XR" per Android Headlines, is a marriage of sorts between the discontinued Meta Quest Pro and the Apple Vision Pro. It features an adjustable headband, primarily acts as passthrough goggles to the world around you and supports an external battery pack. While Samsung's demos of Project Moohan focused on the headset's ability to accept voice commands and track eye and hand movements through built-in microphones and cameras, Android Headlines reports the headset will also support two controller accessories that look a lot like Meta's Touch Plus controllers for the Quest 3.

A grid of apps reportedly from Samsung's Project Moohan headset.
Android Headlines

The internals and software experience on the new device are more predictable. Project Moohan will use a Snapdragon XR2 Gen 2 chip to power its One UI-ified version of Android XR, just as Qualcomm promised when it announced the new processor in 2024. Samsung appears to be taking a lighter touch when it comes to software: screenshots shared by Android Headlines show an app grid with the company's browser, photos and camera apps, but the rest lines up with what Google has shown of Android XR.

The headset will also reportedly feature one high-resolution 4K micro-OLED screen per eye, as previously rumored by Korean publication The Elec, and around two hours of battery life, which is comparable to the Vision Pro. Importantly, Project Moohan is also lighter. The headset reportedly weighs 545 grams, a good bit less than the over 600-gram Apple headset.

The only things really missing now are a price and a release date for Project Moohan. Samsung shared in its Q2 2025 earnings that it still expected to ship the headset in 2025, but hasn't announced an event to introduce the new device. Whenever it does launch, it sounds like it'll be expensive. In August 2025, rumors pointed to Project Moohan costing anywhere from 2,500,000 to 4,000,000 Korean won (around $1,700 to $2,800).

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/the-final-details-of-samsungs-android-xr-headset-have-been-all-but-confirmed-200915560.html?src=rss

© Samsung

Three silver Project Moohan headsets on display in Samsung's MWC 2025 booth.

OpenAI's TikTok of AI slop hit one million downloads faster than ChatGPT

Sora, OpenAI's app and social network for AI-generated videos, has been downloaded over one million times, according to Sora head Bill Peebles. The app reached one million downloads in less than five days, Peebles says, "even faster than ChatGPT did." That's despite OpenAI only making the app available in North America, and its decision to require users to have an invite to actually use it.

Like TikTok, Sora offers an endless vertical feed of videos, only Sora's videos are AI-generated rather than uploaded by users. Creating a 10-second video of your own is as simple as writing a prompt to OpenAI's Sora 2 model in the app. And through Sora's Cameo feature, you can even create videos of yourself and anyone else who's agreed to share their likeness with the service.

sora hit 1M app downloads in <5 days, even faster than chatgpt did (despite the invite flow and only targeting north america!)!

team working hard to keep up with surging growth. more features and fixes to overmoderation on the way!

Bill Peebles (@billpeeb), October 9, 2025

The limited guardrails OpenAI has put on Sora have already led to a rash of videos featuring OpenAI's Sam Altman and content that clearly infringes on copyright. The fact that Sora can so readily create videos of recognizable characters like Pikachu raises questions about what OpenAI's model was trained on, and has unsurprisingly prompted pushback from the larger entertainment industry.

In response, the company has updated Sora to give users more control over what videos their likeness can appear in. OpenAI plans to offer similar controls to rights holders, giving them "the ability to specify how their characters can be used (including not at all)," according to Altman. It's not clear why these controls weren't available when Sora launched, but both seem like good changes.

Because of Sora's invite system, it's difficult to say if the over one million downloads the app has received translates to as many users. It's not unusual for someone to download an app and never use it. Whatever the case, OpenAI's bet on AI-generated videos seems like it might be a winning one, provided the company finds a way to actually make more money than it loses generating videos for Sora.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-tiktok-of-ai-slop-hit-one-million-downloads-faster-than-chatgpt-181216271.html?src=rss

© OpenAI

The Sora app icon featuring a white cloud with sparkly eyes.