Engineering Community Portal

MERLOT Engineering

Welcome – From the Editor

Welcome to the Engineering Portal on MERLOT. Here, you will find lots of resources on a wide variety of topics ranging from aerospace engineering to petroleum engineering to help you with your teaching and research.

As you scroll this page, you will find many Engineering resources. These include the most recently added Engineering materials and members; journals and publications; and Engineering education alerts and Twitter feeds.

Showcase

Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short 3-7 minute simulations cover a range of engineering topics and help students understand key engineering concepts.

Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online. Another option is an embeddable HTML player, created in Storyline, with review questions for each simulation that reinforce the concepts learned.

The simulations were made possible under a Department of Labor grant. Extensive storyboarding and scripting work with instructors and industry experts ensures the content is accurate and up to date.

Engineering Technology 3D Simulations in MERLOT

New Materials

New Members

Engineering on the Web

  • Aurora Avionics supports ATMOS PHOENIX 2.1 with onboard data acquisition systems
    Mar 04, 2026 09:01 AM PST
  • Engineering in the fast lane | UDaily - University of Delaware
    Mar 04, 2026 08:59 AM PST
  • Mayville Engineering (MEC) Earnings Transcript | The Motley Fool
    Mar 04, 2026 08:32 AM PST
  • Behind the research at UTC: Dr. Jejal Bathi | UTC News
    Mar 04, 2026 08:29 AM PST
  • CFX PROJECTS RECEIVE PRESTIGIOUS ACEC 2026 ENGINEERING EXCELLENCE AWARDS
    Mar 04, 2026 07:54 AM PST
  • Boone council approves preliminary engineering agreement for U.S. 421 multi-use pathway
    Mar 04, 2026 07:39 AM PST
  • Taara Brings Fiber-Optic Speeds to Open-Air Laser Links
    Mar 04, 2026 07:00 AM PST
    Taara started as a Google X moonshot spinoff aimed at connecting rural villages in sub-Saharan Africa with beams of light. Its newest product, debuting this week at Mobile World Congress in Barcelona, aims at a different kind of connectivity problem: getting internet access into buildings in cities that already have plenty of fiber—just not where it’s needed. The Sunnyvale, Calif.-based company transmits data via infrared lasers of the kind typically used in fiber-optic lines. However, Taara’s systems beam gigabits across kilometers over open air. “Every one of our Taara terminals is like a digital camera with a laser pointer,” says Mahesh Krishnaswamy, Taara’s CEO. “The laser pointer is the one that’s shining the light on and off, and the digital camera is on the [receiving] side.” Taara’s new system—Taara Beam, being demoed at MWC’s “Game Changers” platform—prioritizes efficiency and a compact size. Each Beam unit is the size of a shoebox and weighs just 7 kilograms, and can be mounted on a utility pole or the side of a building. According to the company, Beam will deliver fiber-competitive speeds of up to 25 gigabits per second with low, 50-microsecond latency. Taara’s former parent company, Krishnaswamy says, is also a prominent client these days. The search engine giant’s main campus in Mountain View, Calif. is near a landing point for a major submarine fiber optic cable. “One of the Google buildings was literally a few hundred meters away from the landing spot in California,” he says. “Yet they couldn’t connect the two points because of land rights and right of way issues. … Without digging and trenching into federal land, we are able to connect the two points at tens of gigabits per second. And so many Googlers are actually using our technology today.”

    A Fingernail-sized Chip Shrinks Taara’s Tech

    The laser pointer and digital camera analogy, Krishnaswamy adds, doesn’t quite do justice to the engineering problems the company had to tackle to fit all the gigabit-per-second photonics into a weather-hardened, shoebox-sized device. The Taara Beam, for one, needs to steer its laser link across kilometers of open air—so that its laser can be received by the Beam device on the other end of the line. Effectively, that means the device’s laser can be off target by no more than a few degrees. Beam approaches the steering problem by physically shaping the laser pulse itself. Taara’s photonics chip splits the laser beam carrying the data into more than a thousand separate streams, delaying each one by a closely controlled amount. The result is a laser wavefront that can be pointed anywhere the system directs. Krishnaswamy makes an analogy to pebbles tossed into a pond. Dropping pebbles in a careful sequence, he says, can create interference patterns in the waves that ripple outwards. “These thousand emitters are equivalent to a thousand stones,” he says. “And I’m able to delay the phase of each of them. That allows me to steer [the wavefront] whichever direction I want it to go.” The idea behind this technology—called a phased array—is not new. But turning it into a commercial optical communications device, at Taara Beam’s scale and range, is where others have so far fallen short. “Radio frequency phased arrays like Starlink antennas are well known,” Krishnaswamy says. “But to do this with optics, and in a commercial way, not just an experimental way, is hard.” (An illustrative numerical sketch of this phase-delay steering idea follows this item.) This isn’t how the company started out, however.
Krishnaswamy says in 2020, when the company was still a Google X subsidiary, Taara launched its first commercial product, the traffic light-sized Lightbridge. Like Beam, Lightbridge boasts fiber-like connection speeds, and it has to date been deployed in more than 20 countries around the world—including the Google campus, described above. Taara’s upgraded model, Lightbridge Pro, launched last month and is also on display this week at MWC. Lightbridge Pro adds one crucial capability Lightbridge lacked: an automatic backup. When fog or rain disrupts Lightbridge’s optical link, the system switches traffic over to a paired radio connection. When conditions clear, Lightbridge Pro switches traffic back to the faster laser data connection. The company says that combination keeps the link up 99.999 percent of the time—less than five minutes of downtime in a year. Both Lightbridge and Lightbridge Pro mechanically position their mirrors, achieving three degrees of pointing accuracy. An onboard tracking system inside the unit also re-locks the beams automatically whenever the unit gets shifted or jostled.

The Future of Taara Beam Deployment

Krishnaswamy says that while the company continues to install and support Lightbridge and Lightbridge Pro, he hopes the company can also begin installing Taara Beam units for select early customers as soon as later this year. According to Mohamed-Slim Alouini—distinguished professor of electrical and computer engineering at King Abdullah University of Science and Technology (KAUST) in Thuwal, Makkah Province, Saudi Arabia—the bandwidth of free-space optical (FSO) technologies like Taara Beam and Lightbridge still leaves plenty of room to grow. “Like any physical medium, free-space optics has a capacity limit,” Alouini says. “But laboratory experiments have already demonstrated fiber-like performance with terabits-per-second data rates over FSO links. The real gap is not in raw capacity but in practical deployment.” Atul Bhatnagar, formerly of Nortel and Cambium Networks, and currently serving as advisor to Taara, sees room for optimism even when it comes to practical deployment. “Current Taara architecture is capable of delivering hundreds of gigabits per second over the next several years,” he says. Krishnaswamy adds that Beam’s compact form factor makes it suitable for more than just terrestrial applications. “We’ll continue to do the work that we’re doing on the ground, but to the extent that space solutions are taking off we would love to be part of that,” he says. “Data-center-to-data-center in space is something we are really looking at using for this technology. Because when you have multiple servers up in space, you can’t run fiber from one to the other,” he adds. “But these photonics modules will be able to point and track and transmit gigabits and gigabits of data to each other.” For now, the company’s ambitions are closer to Earth—specifically to the buildings, utility poles, and city blocks where fiber still hasn’t arrived. Which is, after all, where the company’s whole story began.
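
    The phase-delay steering idea Krishnaswamy describes can be illustrated with a short numerical sketch. The Python snippet below is a generic one-dimensional optical phased-array calculation, not Taara's actual design; the wavelength, emitter count, spacing, and steering angle are assumed values chosen only for illustration.

        import numpy as np

        # Illustrative 1D optical phased array: choose per-emitter phase delays so the
        # combined wavefront points toward a desired steering angle.
        wavelength = 1.55e-6   # assumed 1550 nm infrared carrier
        n_emitters = 1024      # "more than a thousand separate streams"
        pitch = 2.0e-6         # assumed emitter spacing, meters
        steer_deg = 3.0        # commanded steering angle off boresight, degrees

        k = 2 * np.pi / wavelength
        n = np.arange(n_emitters)
        # A linear phase ramp across the aperture steers the beam to steer_deg.
        phases = -k * pitch * n * np.sin(np.radians(steer_deg))

        # Far-field array factor: sum the emitters' contributions over candidate angles
        # and confirm that the radiated power peaks at the commanded angle.
        angles = np.radians(np.linspace(-10, 10, 4001))
        af = np.exp(1j * (k * pitch * np.outer(np.sin(angles), n) + phases)).sum(axis=1)
        peak_deg = np.degrees(angles[np.argmax(np.abs(af))])
        print(f"Beam peak at {peak_deg:.2f} degrees (commanded {steer_deg} degrees)")

    Delaying each emitter by a slightly different, precisely controlled amount is enough to move the peak, which is why the steering needs no moving parts.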
  • Engineering student uses computer simulations to shape the future of high-speed flights
    Mar 04, 2026 06:53 AM PST
  • YIXUN's Top Multi-Axial Warp Knitting Machine With Multi-Angle Weft Insertion - Bluffton Today - XPR
    Mar 04, 2026 06:37 AM PST
  • Enverus to acquire SBS to power AI-driven utility planning and engineering - PR Newswire
    Mar 04, 2026 06:26 AM PST
  • FAU lands $4.5M US Air Force T-1A Jayhawk flight simulator - EurekAlert!
    Mar 04, 2026 06:17 AM PST
  • Enverus to acquire SBS to power AI-driven utility planning and engineering
    Mar 04, 2026 06:15 AM PST
  • Alumni Q&A: Hital Meswani - Penn Engineering
    Mar 04, 2026 05:57 AM PST
  • Engineering a Cleaner Way to Extract Lithium - Eos.org
    Mar 04, 2026 05:17 AM PST
  • Brandauer electrifies engineering partner vision with two high-profile automotive contracts
    Mar 04, 2026 05:06 AM PST
  • If you majored in this subject in college in N.J., you're likely averaging over $120K a year
    Mar 04, 2026 04:16 AM PST
  • Civil and Environmental Engineering MS degree at NAU
    Mar 04, 2026 03:48 AM PST
  • Northwest Omaha growth drives MUD to build $45M pump station and 12 million gallon reservoir
    Mar 04, 2026 03:42 AM PST
  • Team Tobyhanna engineers featured for National Engineers Week - U.S. Army
    Mar 04, 2026 03:31 AM PST
  • Zendure Drives Sustainable Energy Innovation on World Engineering Day 2026 | Corporate
    Mar 04, 2026 02:50 AM PST
  • Love Letter to Humanity - USC Viterbi | School of Engineering
    Mar 04, 2026 02:48 AM PST
  • This Offshore Wind Turbine Will House a Data Center Underwater
    Mar 03, 2026 12:56 PM PST
    As data-center developers frantically seek to secure power for their operations, one startup is proposing a novel solution: Build them into floating offshore wind turbines. San Francisco–based offshore wind-power developer Aikido Technologies today announced its plans to start housing data centers in the underwater tanks that keep its turbine platforms afloat. The turbines will supply the power for the servers, and onboard batteries and a grid connection will provide backup. The company’s first prototype, a 100-kilowatt unit, is scheduled to launch in the North Sea off the coast of Norway by the end of this year. A 15-to-18-megawatt project off the coast of the United Kingdom may follow in 2028. Aikido is one of several companies planning data centers in unusual places—underwater, on floating buoys, in coal mines, and now on offshore wind turbines. The creativity stems from the convergence of several trends: rapidly rising energy demand from data centers, the need for domestic renewable power production, and limited real estate. The North Sea serves as an ideal first spot for floating, wind-powered data centers because European policymakers and companies are looking to regain domestic control over energy production. They’re also looking to host an AI economy on servers within the continent’s boundaries. Floating wind platforms keep the compute out of sight while tapping the stronger, more consistent air streams that blow over deep waters, where traditional, seabed-mounted turbine monopiles can’t go. “A lot of energy in the clean-energy space is focused on powering AI data centers quickly, reliably, and cleanly in a way that does not upset neighbors and remains safe, fast, and cheap,” says Ramez Naam, an independent clean-energy investor who does not have a stake in Aikido. “Aikido has that, and a smart team,” he says.

    Floating Wind-Power Designs Evolve

    Aikido’s design builds on many iterations tested by the growing floating wind industry. When Norwegian energy giant Equinor finished construction on the world’s first floating wind farm in 2017, it kept the turbines upright with ballasted steel columns extending 78 meters into the water—a design called a spar platform. This gave it a dense mass like the keel of a boat. Since then, the floating wind industry has largely coalesced around a semisubmersible design based on oil and gas platforms. Semisubmersibles don’t go as deep as spar platforms; instead, they extend buoyancy horizontally. Anchors, chains, and ropes keep the platform floating within a certain radius. Aikido is taking the semisubmersible approach. Its football-field-size platform holds the turbine in the center, and three legs extend tripod-like outward, like a Christmas-tree stand. At the end of each leg is a ballast that reaches 20 meters deep. This holds tanks largely filled with fresh water to maintain the platform’s buoyancy in the salty ocean. The data centers will go in the upper part of each ballast tank. There’s room for a 3- to 4-MW data hall in each tank, giving the platform a combined compute of 10 to 12 MW. Below the data halls is an open chamber used as a safety barrier, and below that sit the freshwater tanks. The water is piped up to the data center for liquid cooling of the servers. The warmed water is then funneled back down the ballast into the tank. There, proximity to the cold ocean water cools it again as the heat is conducted out through the tank’s steel walls. “We have this power from the wind. We have free cooling.
We think we can be quite cost competitive compared to conventional data-center solutions,” says Aikido CEO Sam Kanner. “This crunch in the next five years is an opportunity for us to prove this out and supply AI compute where it’s needed.” One challenge, he says, is that liquid cooling can’t cover all the data center’s needs. For example, heat generated from Ethernet switches that connect the GPUs can’t be liquid-cooled with commercially available technology. So Aikido added an air-conditioning system for that. (A rough energy-balance sketch of the cooling flow this implies follows this item.) Another challenge is the marine environment, which is “pretty brutal to engineer around because there’s the increased salinity, there’s debris, and there’s various kinds of corrosion and fouling of metal piping that you wouldn’t have in a freshwater environment,” says Daniel King, a research fellow at the Foundation for American Innovation in Washington who focuses on AI infrastructure.

Offshore Data Centers Face Challenges

Aikido’s plan avoids the prickly not-in-my-backyard complaints that are dogging both onshore wind and data-center projects. It might also circumvent some inquiries into water usage and power demand, or so Aikido’s thinking goes. But it might not be that easy. “Instinctively many people reach for offshore or even orbital outer-space data centers as a way to circumvent the typical burdens of environmental reviews,” says King. “But there could be more or additional requirements around discharging heat and the effects that has on marine life that are different from the considerations of a terrestrial data center. It’s unclear to me whether this actually makes life easier or harder for a developer.” Prefabricated data halls could be installed quayside, followed by final electrical and plumbing connections to commission the data center. Aikido’s “design choice to use the fresh water in the ballast as a working fluid is a novel one” that, thanks to the closed-loop system, may “alleviate some of the engineering problems you see when a really high temperature fluid is pumping its heat directly into a marine environment,” King says. Offshore sites are also vulnerable to sabotage, King notes. Since Russia’s invasion of Ukraine, fleets of vessels directed by the Kremlin have reportedly started messing with offshore wind and communications infrastructure in northern Europe. Russian and Chinese boats have allegedly cut subsea cables in recent years. But vandalism is a risk anywhere, including at conventional data centers, Aikido CEO Kanner notes. Unlike those on land, where the local police have jurisdiction, Aikido’s data centers would enjoy protection from national coast guards, which he suggests gives an added degree of security.

North Sea Hosts Clean Energy

Kanner first began thinking about offshore wind turbines as a place to build data centers after a chance phone call with a cryptocurrency billionaire. The financier wanted to know whether turbines in international waters could power servers generating digital tokens at a moment when crypto-mining faced increased scrutiny from regulators. The talks fizzled. But that encounter sparked Kanner’s curiosity about how to use power generated onboard floating turbines. When ChatGPT emerged in 2022 and sparked a heated debate over how to power and cool such technology, the idea to put the data center in the floating turbine clicked for Kanner. The idea really congealed after he met with the chief executive of Portland, Ore.–based Panthalassa.
The wave-energy company was proposing to enclose small, remote data centers in buoys attached to equipment that generates power from the surf. Panthalassa just completed its full-scale prototype tests off the coast of Washington state last summer. At that point, Aikido had already designed a modular platform for floating wind turbines. Each platform consists of 13 major steel components that are snapped together with pin joints—like IKEA furniture. The platforms fold up in a flat configuration that takes up roughly half the space of other designs, allowing it to be transported by a wider range of ships, according to Aikido. From there, it was a matter of figuring out how to accommodate a data center in the unused space. Aikido’s prototype will use a refurbished Vesta V-17 turbine. It will need onboard batteries for backup power and will also be connected to the grid for additional power during seasons with less wind. Aikido envisions eventually sprinkling its data centers among large arrays of offshore turbines to tap into that larger power infrastructure. Between Russia’s threat to expand its war in Ukraine to EU countries and the Trump administration’s bid to pressure Denmark into ceding sovereignty of Greenland to Washington, Europe is scrambling to build up its own energy production and AI capabilities. The North Sea, increasingly, looks like a primary theater of that effort. In January, nearly a dozen European nations banded together in a pact to transform the North Sea into a “reservoir” of clean power from offshore wind.
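
    The free-cooling claim above can be put in rough numbers with a simple energy balance. The figures below (hall power, coolant temperature rise) are assumptions for illustration only, not Aikido design values.

        # Rough energy-balance sketch for liquid cooling of one ballast-tank data hall.
        hall_power_w = 3.5e6    # assumed ~3-4 MW of IT load per tank (midpoint)
        cp_water = 4186.0       # specific heat of water, J/(kg*K)
        delta_t = 10.0          # assumed coolant temperature rise across the servers, K

        # Q = m_dot * cp * dT  ->  required mass flow of fresh water from the ballast tank
        m_dot = hall_power_w / (cp_water * delta_t)   # kg/s
        print(f"Required coolant flow: {m_dot:.0f} kg/s (roughly {m_dot:.0f} L/s of water)")

        # Combined platform IT load if all three tanks host halls of this size
        print(f"Platform IT load: {3 * hall_power_w / 1e6:.1f} MW")

    On these assumptions, each hall needs on the order of 80-90 kg/s of circulating water, carrying heat that the tank's steel walls must ultimately conduct into the surrounding ocean.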
  • Countdown to IEEE’s Annual Election
    Mar 03, 2026 11:00 AM PST
    This year’s annual election, which begins on 17 August, will include candidates for IEEE president-elect and other officer positions up for election. To see who is running for 2027 IEEE president-elect and the petition candidates, visit the election website. The ballot also includes nominees for delegate-elect/director-elect offices submitted by division and region nominating committees, as well as IEEE Technical Activities vice president-elect; IEEE-USA president-elect; and IEEE Standards Association board of governors members-at-large. Those elected take office on 1 January 2027. IEEE members who have not been nominated but want to run for an office other than IEEE president-elect must submit their petition intention to the IEEE Board of Directors by 1 April. Petitions should be sent to the IEEE Corporate Governance staff at elections@ieee.org. The petition intention deadline for IEEE president-elect was 31 December.

    Election Updates

    Regional elections will also take place. Eligible voting members in IEEE Region 1 (Northeastern U.S.) and Region 2 (Eastern U.S.) will elect the future IEEE Region 2 delegate-elect/director-elect (Eastern and Northeastern U.S.) for the 2027-2028 term. Members in the future IEEE Region 10 (North Asia) will elect the IEEE Region 10 delegate-elect/director-elect for the same term. These changes reflect IEEE’s upcoming region realignment, as outlined in The Institute’s September 2024 article, “How Region Realignment Will Impact IEEE Elections.” Beginning this year, only professional members will be eligible to vote in IEEE’s annual election or sign related petitions. Ballots will be created for eligible voting members on record as of 31 March. To ensure voting eligibility, all members should review and update their contact information and communication preferences by that date. To support sustainability initiatives, the “Candidate Biographies and Statements” booklet will no longer be available in print. Members can access the candidate biographies and statements within their electronic ballot, view them on the annual election website, or download the digital booklet. Members are also encouraged to vote electronically. For more information about the offices up for election, the process for getting on the annual ballot, and deadlines, visit the website or email elections@ieee.org.
  • Optimizing a Battery Electric Vehicle Thermal Management System
    Mar 03, 2026 03:00 AM PST
    This webinar looks at a Battery Electric Virtual Vehicle Model of a mid-size BEV, and uses Simulink and Simscape to facilitate design exploration, component refinement, and system-level optimization. The virtual vehicle comprises five subsystems: electric powertrain, driveline, refrigerant cycle, coolant cycle, and passenger cabin. The model will be tested using different drive cycles, cooling, and heating scenarios. The results will be analyzed to determine the impact of the different design parameters on vehicle consumption. The resulting virtual vehicle will be used to:
      • Test different drive cycles and environmental conditions
      • Perform sensitivity analysis
      • Optimize the model to improve thermal performance and consumption
    Register now for this free webinar!
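
    The webinar's model is built in Simulink and Simscape; as a rough illustration of the kind of lumped thermal balance such a virtual vehicle captures, here is a minimal Python sketch. All parameters (pack mass, heat generation, coolant conductance) are assumed values, not figures from the webinar.

        import numpy as np

        # Minimal lumped-capacitance sketch of a battery pack's thermal balance
        # over a synthetic 30-minute drive cycle.
        dt = 1.0                                                  # time step, s
        t = np.arange(0, 1800, dt)
        heat_gen = 1500 + 1000 * np.sin(2 * np.pi * t / 300)      # battery losses, W (assumed)

        m_pack, cp_pack = 300.0, 900.0    # pack mass (kg) and specific heat (J/kg/K), assumed
        ua_coolant = 250.0                # coolant-loop conductance, W/K (assumed)
        t_coolant = 25.0                  # coolant temperature, deg C
        temp = 25.0                       # initial pack temperature, deg C

        temps = []
        for q in heat_gen:
            # Energy balance: heat generated minus heat removed by the coolant loop
            temp += dt * (q - ua_coolant * (temp - t_coolant)) / (m_pack * cp_pack)
            temps.append(temp)

        print(f"Peak pack temperature over the cycle: {max(temps):.1f} deg C")

    Sweeping a parameter such as ua_coolant over a range of values is the same kind of sensitivity analysis the webinar performs with the full Simscape model.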
  • Watershed Moment for AI–Human Collaboration in Math
    Mar 02, 2026 10:00 AM PST
    When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to assist with mathematical research. “These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc Liam Fowl, who was not involved in the work. In her Fields Medal–winning research, Viazovska had tackled two versions of the sphere-packing problem, which asks: How densely can identical circles, spheres, et cetera, be packed in n-dimensional space? In two dimensions, the honeycomb is the best solution. In three dimensions, spheres stacked in a pyramid are optimal. But after that, it becomes exceedingly difficult to find the best solution, and to prove that it is in fact the best. In 2016, Viazovska solved the problem in two cases. By using powerful mathematical functions known as (quasi-)modular forms, she proved that a symmetric arrangement known as E8 is the best 8-dimensional packing, and soon after proved with collaborators that another sphere packing called the Leech lattice is best in 24 dimensions. Though seemingly abstract, this result has potential to help solve everyday problems related to dense sphere packing, including error-correcting codes used by smartphones and space probes. The proofs were verified by the mathematical community and deemed correct, leading to the Fields Medal recognition. But formal verification—the ability of a proof to be verified by a computer—is another beast altogether. Since 2022, much progress has been made in AI-assisted formal proof verification.

    Serendipity leads to formalization project

    A few years later, a chance meeting in Lausanne, Switzerland, between third-year undergraduate Sidharth Hariharan and Viazovska would reignite her interest in sphere-packing proofs. Though still very early in his career, Hariharan was already becoming adept at formalizing proofs. “Formal verification of a proof is like a rubber stamp,” Fowl says. “It’s a kind of bona fide certification that you know your statements of reasoning are correct.” Hariharan told Viazovska how he had been using the process of formalizing proofs to learn and really understand mathematical concepts. In response, Viazovska expressed an interest in formalizing her proofs, largely out of curiosity. From this, in March 2024 the Formalising Sphere Packing in Lean project was born. Lean is a popular programming language and “proof assistant” that allows mathematicians to write proofs that are then verified for absolute correctness by a computer. A collaboration formed to write a human-readable “blueprint” that could be used to map the 8-dimensional proof’s various constituents and figure out which of them had and had not been formalized and/or proven, and then prove and formalize those missing elements in Lean. “We had been building the project’s repository for about 15 months when we enabled public access in June 2025,” recalls Hariharan, now a first-year Ph.D. student at Carnegie Mellon University. “Then, in late October we heard from Math, Inc.
for the first time.”

The AI speedup

Math, Inc. is a startup developing Gauss, an AI specifically designed to automatically formalize proofs. “It’s a particular kind of language model called a reasoning agent that’s meant to interleave both traditional natural-language reasoning and fully formalized reasoning,” explains Jesse Han, Math, Inc. CEO and cofounder. “So it’s able to conduct literature searches, call up tools, and use a computer to write down Lean code, take notes, spin up verification tooling, run the Lean compiler, et cetera.” Math, Inc. first hit the headlines when it announced that Gauss had completed a Lean formalization of the strong prime number theorem (PNT) in three weeks last summer, a task that Fields Medalist Terence Tao and Alex Kontorovich had been working on. Similarly, Math, Inc. contacted Hariharan and colleagues to say that Gauss had proven several facts related to their sphere-packing project. “They told us that they had finished 30 ‘sorrys,’ which meant that they proved 30 intermediate facts that we wanted proved,” explains Hariharan. (A minimal Lean illustration of such a placeholder follows this item.) A proportion of these sorrys were shared with the project team and merged with their own work. “One of them helped us identify a typo in our project, which we then fixed,” adds Hariharan. “So it was a pretty fruitful collaboration.”

From 8 to 24 dimensions

But then, radio silence followed. Math, Inc. appeared to lose interest. However, while Hariharan and colleagues continued their labor of love, Math, Inc. was building a new and improved version of Gauss. “We made a research breakthrough sometime mid-January that produced a much stronger version of Gauss,” says Han. “This new version reproduced our three-week PNT result in two to three days.” Days later, the new Gauss was steered back to the sphere-packing formalization. Working from the invaluable preexisting blueprint and work that Hariharan and collaborators had shared, Gauss not only autoformalized the 8-dimensional case, but also found and fixed a typo in the published paper, all in the space of five days. “When they reached out to us in late January saying that they finished it, to put it very mildly, we were very surprised,” says Hariharan. “But at the end of the day, this is technology that we’re very excited about, because it has the capability to do great things and to assist mathematicians in remarkable ways.” The 8-dimensional sphere-packing proof formalization alone, announced on February 23, represents a watershed moment for autoformalization and AI–human collaboration. But today, Math, Inc. revealed an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks. There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. However, Gauss had no preexisting blueprint to work from this time. “And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought on line surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.
Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI. But for Han, it represents even more: the beginning of a revolutionary transformation in mathematics, where extremely large-scale formalizations are commonplace. “A programmer used to be someone who punched holes into cards, but then the act of programming became separated from whatever material substrate was used for recording programs,” he concludes. “I think the end result of technology like this will be to free mathematicians to do what they do best, which is to dream of new mathematical worlds.”
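
    The "sorrys" mentioned above are Lean's placeholders for statements whose proofs are still missing. The toy Lean 4 snippet below (a hypothetical example, not taken from the sphere-packing repository) shows what such a placeholder looks like and what closing one means.

        -- A lemma whose proof is left as `sorry` marks an intermediate fact
        -- still waiting to be formalized; Lean accepts the file but flags it.
        theorem toy_lemma (a b : Nat) : a + b = b + a := by
          sorry

        -- "Finishing" the sorry means replacing the placeholder with a real proof.
        theorem toy_lemma_proved (a b : Nat) : a + b = b + a := by
          exact Nat.add_comm a b

    A project blueprint like the one Hariharan's team built is essentially a map of which statements still carry a sorry and which have been fully proved.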
  • How Electrical Engineers Fight a War
    Mar 02, 2026 06:00 AM PST
    Every time Russia attacks Ukraine’s power infrastructure, Ukrainian engineers risk their lives in the scramble to get electricity flowing again. It’s a dangerous job at best, and a lethal one at worst. It also requires creativity. Time pressure and equipment shortages make it nearly impossible to rebuild things exactly as they were, so engineers must redesign on the fly. These dangerous, stressful conditions have led to more engineers being hurt or killed. The rate of injuries among Ukrainian workers in electricity generation, transmission, and distribution jumped nearly 50 percent after Russia’s full-scale invasion began four years ago, according to data provided by Antonina Nagorna, who leads the Department of Epidemiology and Physiology of Work at the Kundiiev Institute of Occupational Health, in Kyiv. By her count at least 48 people had died on the job through the end of 2025, either while repairing damage or during the bombardment itself. Transmission mastermind Oleksiy Brecht joined that grim count in January. Brecht, who was director for network operations and development at the Ukrainian grid operator Ukrenergo, died while coordinating work at Ukraine’s most attacked electrical switchyard, Kyivska, west of the capital. He was 47 years old. Brecht’s life and death are a window into the realities of thousands of Ukrainian engineers who face conditions beyond what most engineers could imagine. “The war completely transformed the professional life of a top-manager engineer,” says Mariia Tsaturian, an energy analyst and chief communication officer at the think tank Ukraine Facility Platform, who previously worked with Brecht at Ukrenergo. “As for junior staff, their world was turned upside down entirely. A substation engineer working under shelling is something no one had ever seen or experienced before,” she says.

    How Russia Attacks Ukraine’s Grid

    Over the course of the war, Russia has increasingly focused on destroying Ukraine’s energy infrastructure. It sends attack drones almost daily during the winter there, when heat and electricity are needed most to survive the bitter cold. Every 10 days or so it barrages Ukraine’s power system with combinations of missiles and hundreds of drones, repeatedly mangling equipment and cutting off power. The cold imposed on Ukrainian homes is especially hard on former prisoners of war held in Russia, where cold is routinely employed as a form of torture. In the first two years of the war, keeping the grid flowing was a 24/7 job. But Ukrenergo has adapted to the impossible since then, says Vitaliy Zaychenko, Ukrenergo’s CEO, who somehow found a moment to speak with IEEE Spectrum via video call. Now, “we are more prepared for each attack. We have well-trained teams. We have support from Europe,” he says. But the risk involved in repairing the grid remains unnerving. Last month a crew from DTEK, Ukraine’s biggest private-sector energy firm, was traveling between locations when it was targeted by a Russian drone. They heard the drone coming and escaped before their bucket truck was destroyed. Russian forces have employed “double tap” attacks against DTEK’s crews, targeting their power infrastructure with a follow-up strike designed to kill first responders—a practice confirmed by the U.N. When Russia began targeting power infrastructure in October 2022, Brecht’s job shifted from high-level direction of grid planning and maintenance to near-constant triage and real-time system reengineering.
Most weeks, Brecht spent several days in the field, crisscrossing the country to coordinate work at smashed substations. Brecht would often be found on site figuring out how to restart power using whatever equipment was available. “It was a unique decision every time,” says Zaychenko. Oleksiy Brecht died in January while overseeing repairs to a bombed-out substation near Kyiv. He called his employees at Ukrenergo “my fighters.” They called him “our general.” Zaychenko noted Brecht’s “genius” for finding creative grid fixes, his passion and leadership skills, and his credibility with power brokers in Ukraine and abroad. Brecht scoured the globe sourcing critical replacement parts, including stockpiled or older equipment from international utilities. Transformers, which can take a year or more to source, are especially precious. When the right equipment wasn’t forthcoming, Brecht figured out how to make do. For example, he would deploy transformers from Western Europe rated for 400 kilovolts to restart a 330-kV circuit. He would adapt transformers designed for 60-hertz alternating current for emergency use on Ukraine’s 50-Hz grid. (A simplified volts-per-hertz sketch of that kind of frequency adaptation follows this item.) “He would find a way,” says Zaychenko, who worked closely with Brecht for over 20 years. Brecht’s assistant at Ukrenergo, Svitlana Dubas-Veremiienko, says he also contributed to the teams’ morale and confidence. She shared on Facebook that he smoked “like a locomotive” at the worst times, and yet exuded calm: “In his presence, chaos subsided,” she wrote. Brecht was not easy to intimidate. “He was someone who never feared anything or anyone,” adds Tsaturian. Brecht’s work proved so essential that Ukrenergo’s former Deputy CEO Andrii Nemyrovskyi recalls telling Ukraine’s Ministry of Defense in 2022 that the military must protect two people: Zaychenko, because he ran grid operations, and Brecht because “system operations requires that the system exists.” Last week, President Zelenskyy posthumously named Brecht a “Hero of Ukraine” for “strengthening the energy security of Ukraine under martial law.”

Ukraine’s Power Infrastructure Under Fire

Brecht joined Ukrenergo in 2002 after earning his degree in power engineering from Igor Sikorsky Kyiv Polytechnic Institute. Over the next 20 years, he held leadership positions in dispatching and grid planning and development. He joined Ukrenergo’s management board in June 2022 and served as its interim leader in 2024. Brecht’s contributions to Ukraine’s wartime survival began with several key upgrades to Ukrenergo’s technical capabilities ahead of the February 2022 invasion. He reintroduced “live line” techniques, providing training and equipment that enable crews to work on circuits while they continue to carry power to homes and to sustain critical needs. Brecht also led preparations for Ukraine’s disconnection from the Russian grid and synchronization with Europe’s. When the invasion began, Ukraine’s Minister of Energy at the time, Herman Halushchenko, had argued that switching from Russia’s grid to Europe’s was too risky, according to Tsaturian and Nemyrovskyi. But Brecht insisted—correctly, as hindsight has shown—that synchronizing with Europe would provide crucial stability and backup power. At his urging, the switch was completed in daring fashion during the first weeks of the invasion. (Halushchenko was dismissed last year following longstanding allegations of corruption and Russian influence in Ukraine’s energy sector that gave way to indictments in November 2025 that have rocked President Zelenskyy’s government.
In January, Halushchenko was detained while attempting to leave the country and charged with money laundering.)

A Ukrainian Electrical Engineer’s Final Day

Brecht’s final act of service followed the mass destruction of January 19—a day when Kyiv’s high temperature was –10° C. That night, Russian forces targeted Ukraine’s energy infrastructure with 18 ballistic missiles, a hypersonic cruise missile, 15 conventional cruise missiles, and 339 drones. The impact included catastrophic damage at the 750-kV Kyivska substation, which feeds electricity to the capital and ensures cooling power for two nuclear power plants. Brecht was leading a team of about 100 people who were undoing the damage when he made a deadly choice. He picked up a section of busbar—a solid conduit that connects circuits within substations. It had been blasted to the ground and, unbeknownst to Brecht, was carrying lethal voltage. It’s unclear whether its circuit was still connected, or if it had picked up voltage from another circuit. Zaychenko says an investigation is ongoing to provide answers. “I don’t know why he touched this busbar. Maybe because of tiredness. Maybe something else,” he says. “He was trying to help the team to do this job quickly. It was a huge mistake and a huge loss for us.”
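
    The 60-hertz-to-50-hertz adaptation mentioned above is governed by a simple volts-per-hertz constraint: transformer core flux scales roughly with V/f, so a unit designed for 60 Hz can only be run at about five-sixths of its nameplate voltage on a 50-Hz grid without over-fluxing the core. The sketch below uses assumed nameplate values for illustration and is not a description of the specific adaptations Brecht made.

        # Simplified volts-per-hertz check for running a 60 Hz transformer on a 50 Hz grid.
        nameplate_kv = 400.0                  # assumed 400 kV, 60 Hz design rating
        design_v_per_hz = nameplate_kv / 60.0

        max_kv_at_50hz = design_v_per_hz * 50.0
        print(f"Max voltage at 50 Hz without over-fluxing: {max_kv_at_50hz:.0f} kV")

        # A 330 kV circuit stays within that limit
        print(f"330 kV circuit within limit: {330.0 <= max_kv_at_50hz}")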
  • How Quantum Data Can Teach AI to Do Better Chemistry
    Mar 02, 2026 05:00 AM PST
    Sometimes a visually compelling metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named John P. Perdew came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “Jacob’s Ladder.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.” Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder, using increasingly more intensive mathematics and compute power, descriptions of atomic reality became more precise. And at the very top, nature was perfectly described via impossibly intensive computation—something like what God might see. With this metaphor in mind, we propose to extend Jacob’s Ladder beyond Perdew’s version, to encompass all computational approaches to simulating the behavior of electrons. And instead of climbing rung by rung toward an unreachable summit, we have an idea to bend the ladder so that even the very top lies within our grasp. Specifically, we at Microsoft envision a hybrid approach. It starts with using quantum computers to generate exquisitely accurate data about the behavior of electrons—data that would be prohibitively expensive to compute classically. This quantum-generated data will then train AI models running on classical machines, which can predict the properties of materials with remarkable speed. By combining quantum accuracy with AI-driven speed, we can ascend Jacob’s Ladder faster, designing new materials with novel properties and at a fraction of the cost.

    At the base of Jacob’s Ladder are classical models that treat atoms as simple balls connected by springs—fast enough to handle millions of atoms over long times but with the lowest precision. Moving up along the black line, semiempirical methods add some quantum mechanical calculations. Next are approximations based on Hartree-Fock (HF) and density functional theory (DFT), which include full quantum behavior of individual electrons but model their interactions in an averaged way. The greater accuracy requires significant computing power, which limits them to simulating molecules with no more than a few hundred atoms. At the top are coupled-cluster and full configuration interaction (FCI) methods—exquisitely accurate but, at the moment, restricted to tiny molecules or subsets of electrons due to the large computational costs involved. Quantum computing can bend the accuracy-versus-cost curve at the top of Jacob’s Ladder [orange line], making highly accurate calculations feasible for large systems. AI, trained on this quantum-accurate data, can flatten this curve [purple line], enabling rapid predictions for similar systems at a fraction of the cost of classical computing. Source: Microsoft Quantum

    In our approach, the base of Jacob’s Ladder still starts with classical models that treat atoms as simple balls connected by springs—models that are fast enough to handle millions of atoms over long times, but with the lowest precision. As we ascend the ladder, some quantum mechanical calculations are added to semiempirical methods.
Eventually, we’ll get to the full quantum behavior of individual electrons but with their interactions modeled in an averaged way; this greater accuracy requires significant compute power, which means you can only simulate molecules of no more than a few hundred atoms. At the top will be the most computationally intensive methods—prohibitively expensive on classical computers but tractable on quantum computers. In the coming years, quantum computing and AI will become critical tools in the pursuit of new materials science and chemistry. When combined, their forces will multiply. We believe that by using quantum computers to train AI on quantum data, the result will be hyperaccurate AI models that can reach ever higher rungs of computational complexity without the prohibitive computational costs. This powerful combination of quantum computing and AI could unlock unprecedented advances in chemical discovery, materials design, and our understanding of complex reaction mechanisms. Chemical and materials innovations already play a vital—if often invisible—role in our daily lives. These discoveries shape the modern world: new drugs to help treat disease more effectively, improving health and extending life expectancy; everyday products like toothpaste, sunscreen, and cleaning supplies that are safe and effective; cleaner fuels and longer-lasting batteries; improved fertilizers and pesticides to boost global food production; and biodegradable plastics and recyclable materials to shrink our environmental footprint. In short, chemical discovery is a behind-the-scenes force that greatly enhances our everyday lives. The potential is vast. Anywhere AI is already in use, this new quantum-enhanced AI could drastically improve results. These models could, for instance, scan for previously unknown catalysts that could fix atmospheric carbon and so mitigate climate change. They could discover novel chemical reactions to turn waste plastics into useful raw materials and remove toxic “forever chemicals” from the environment. They could uncover new battery chemistries for safer, more compact energy storage. They could supercharge drug discovery for personalized medicine. And that would just be the beginning. We believe quantum-enhanced AI will open up new frontiers in materials science and reshape our ability to understand and manipulate matter at its most fundamental level. Here’s how.

How Quantum Computing Will Revolutionize Chemistry

To understand how quantum computing and AI could help bend Jacob’s Ladder, it’s useful to look at the classical approximation techniques that are currently used in chemistry. In atoms and molecules, electrons interact with one another in complex ways called electron correlations. These correlations are crucial for accurately describing chemical systems. Many computational methods, such as density functional theory (DFT) or the Hartree-Fock method, simplify these interactions by replacing the intricate correlations with averaged ones, assuming that each electron moves within an average field created by all other electrons. Such approximations work in many cases, but they can’t provide a full description of the system.

A joint project between Microsoft and Pacific Northwest National Laboratory used AI and high-performance computing to identify potential materials for battery electrolytes. The most promising were synthesized [top and middle] and tested [bottom] at PNNL.
Photo: Dan DeLong/Microsoft

Electron correlation is particularly important in systems where the electrons are strongly interacting—as in materials with unusual electronic properties, like high-temperature superconductors—or when there are many possible arrangements of electrons with similar energies—such as compounds containing certain metal atoms that are crucial for catalytic processes. In these cases, the simplified approach of DFT or Hartree-Fock breaks down, and more sophisticated methods are needed. As the number of possible electron configurations increases, we quickly reach an “exponential wall” in computational complexity, beyond which classical methods become infeasible. (A toy calculation illustrating this combinatorial growth follows this item.) Enter the quantum computer. Unlike classical bits, which are either on or off, qubits can exist in superpositions—effectively coexisting in multiple states simultaneously. This should allow them to represent many electron configurations at once, mirroring the complex quantum behavior of correlated electrons. Because quantum computers operate on the same principles as the electron systems they will simulate, they will be able to accurately simulate even strongly correlated systems—where electrons are so interdependent that their behavior must be calculated collectively.

AI’s Role in Advancing Computational Chemistry

At present, even the computationally cheap methods at the bottom of Jacob’s Ladder are slow, and the ones higher up the ladder are slower still. AI models have emerged as powerful accelerators of such calculations because they can serve as emulators that predict simulation outcomes without running the full calculations. The models can speed up the time it takes to solve problems up and down the ladder by orders of magnitude. This acceleration opens up entirely new scales of scientific exploration. In 2023 and 2024, we collaborated with researchers at Pacific Northwest National Laboratory (PNNL) on using advanced AI models to evaluate over 32 million potential battery materials, looking for safer, cheaper, and more environmentally friendly options. This enormous pool of candidates would have taken about 20 years to explore using traditional methods. And yet, within less than a week, that list was narrowed to 500,000 stable materials and then to 800 highly promising candidates. Throughout the evaluation, the AI models replaced expensive and time-consuming quantum chemistry calculations, in some cases delivering insights half a million times as fast as would otherwise have been the case. We then used high-performance computing (HPC) to validate the most promising materials with DFT and AI-accelerated molecular dynamics simulations. The PNNL team then spent about nine months synthesizing and testing one of the candidates—a solid-state electrolyte that uses sodium, which is cheap and abundant, and some other materials, with 70 percent less lithium than conventional lithium-ion designs. The team then built a prototype solid-state battery that they tested over a range of temperatures. This potential battery breakthrough isn’t unique. AI models have also dramatically accelerated research in climate science, fluid dynamics, astrophysics, protein design, and chemical and biological discovery. By replacing traditional simulations that can take days or weeks to run, AI is reshaping the pace and scope of scientific research across disciplines. However, these AI models are only as good as the quality and diversity of their training data.
Whether sourced from high-fidelity simulations or carefully curated experimental results, these data must accurately represent the underlying physical phenomena to ensure reliable predictions. Poor or biased data can lead to misleading outcomes. By contrast, high-quality, diverse datasets—such as those from full-accuracy quantum simulations—enable models to generalize across systems and uncover new scientific insights. This is the promise of using quantum computing for training AI models.

How to Accelerate Chemical Discovery

The real breakthrough will come from strategically combining quantum computing’s and AI’s unique strengths. AI already excels at learning patterns and making rapid predictions. Quantum computers, which are still being scaled up to be practically useful, will excel at capturing electron correlations that classical computers can only approximate. So if you train classical models on quantum-generated data, you’ll get the best of both worlds: the accuracy of quantum delivered at the speed of AI. As we learned from the Microsoft-PNNL collaboration on electrolytes, AI models alone can greatly speed up chemical discovery. In the future, quantum-accurate AI models will tackle even bigger challenges. Consider the basic discovery process, which we can think of as a funnel. Scientists begin with a vast pool of candidate molecules or materials at the wide-mouthed top, narrowing them down using filters based on desired properties—such as boiling point, conductivity, viscosity, or reactivity. Crucially, the effectiveness of this screening process depends heavily on the accuracy of the models used to predict these properties. Inaccurate predictions can create a “leaky” funnel, where promising candidates are mistakenly discarded or poor ones are mistakenly advanced. Quantum-accurate AI models will dramatically improve the precision of chemical-property predictions. They’ll be able to help identify “first-time right” candidates, sending only the most promising molecules to the lab for synthesis and testing—which will save both time and cost. Another key aspect of the discovery process is understanding the chemical reactions that govern how new substances are formed and behave. Think of these reactions as a network of roads winding through a mountainous landscape, where each road represents a possible reaction step, from starting materials to final products. The outcome of a reaction depends on how quickly it travels down each path, which in turn is determined by the energy barriers along the way—like mountain passes that must be crossed. To find the most efficient route, we need accurate calculations of these barrier heights, so that we can identify the lowest passes and chart the fastest path through the reaction landscape. Even small errors in estimating these barriers can lead to incorrect predictions about which products will form. Case in point: A slight miscalculation in the energy barrier of an environmental reaction could mean the difference between labeling a compound a “forever chemical” or one that safely degrades over time. Accurate modeling of reaction rates is also essential for designing catalysts—substances that speed up and steer reactions in desired directions. Catalysts are crucial in industrial chemical production, carbon capture, and biological processes, among many other things. Here, too, quantum-accurate AI models can play a transformative role by providing the high-fidelity data needed to predict reaction outcomes and design better catalysts.
Once trained, these AI models, powered by quantum-accurate data, will revolutionize computational chemistry by delivering quantum-level precision. And once the AI models, which run on classical computers, are trained with quantum computing data, researchers will be able to run high-accuracy simulations on laptops or desktop computers, rather than relying on massive supercomputers or future quantum hardware. By making advanced chemical modeling more accessible, these tools will democratize discovery and empower a broader community of scientists to tackle some of the most pressing challenges in health, energy, and sustainability.

Remaining Challenges for AI and Quantum Computing

By now, you’re probably wondering: When will this transformative future arrive? It’s true that quantum computers still struggle with error rates and limited lifetimes of usable qubits. And they still need to scale to the size required for meaningful chemistry simulations. Meaningful chemistry simulations beyond the reach of classical computation will require hundreds to thousands of high-quality qubits with error rates of around 10⁻¹⁵, or one error in a quadrillion operations. Achieving this level of reliability will require fault tolerance through redundant encoding of quantum information in logical qubits, each consisting of hundreds of physical qubits, thus requiring a total of about a million physical qubits. Current AI models for chemical-property predictions may not have to be fully redesigned. We expect that it will be sufficient to start with models pretrained on classical data and then fine-tune them with a few results from quantum computers. Despite some open questions, the potential rewards in terms of scientific understanding and technological breakthroughs make our proposal a compelling direction for the field. The quantum computing industry has begun to move beyond the early noisy prototypes, and high-fidelity quantum computers with low error rates could be possible within a decade. Realizing the full potential of quantum-enhanced AI for chemical discovery will require focused collaboration between chemists and materials scientists who understand the target problems, experts in quantum computing who are building the hardware, and AI researchers who are developing the algorithms. Done right, quantum-enhanced AI could start to tackle the world’s toughest challenges—from climate change to disease—years ahead of anyone’s expectations.
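
    The "exponential wall" described earlier in this article can be made concrete by counting the electron configurations (Slater determinants) a full configuration interaction calculation must consider. The molecule sizes below are chosen only for illustration.

        from math import comb

        # With M spatial orbitals holding N_alpha spin-up and N_beta spin-down electrons,
        # a full configuration interaction expansion has C(M, N_alpha) * C(M, N_beta) terms.
        def fci_determinants(orbitals: int, n_alpha: int, n_beta: int) -> int:
            return comb(orbitals, n_alpha) * comb(orbitals, n_beta)

        for m, na, nb, label in [(10, 5, 5, "small molecule"),
                                 (30, 15, 15, "medium molecule"),
                                 (60, 30, 30, "larger molecule")]:
            count = fci_determinants(m, na, nb)
            print(f"{label}: {m} orbitals, {na + nb} electrons -> {count:.3e} configurations")

    The count climbs from tens of thousands of configurations to roughly 10^34 as the system grows, which is why classical machines hit the wall and why the authors look to qubits that can represent many configurations at once.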
  • What Military Drones Can Teach Self-Driving Cars
    Mar 02, 2026 04:00 AM PST
    Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events, causing significant disruption to local traffic and possibly blocking first responders from doing their jobs. Because self-driving cars cannot successfully handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary. This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays. As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today. While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4000 miles away, my job was to learn about the pain points for the remote operators as well as identify possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world. Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays. The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.

    Five Lessons From Military Drone Operations

    Over 35 years of UAV operations, the military consistently encountered five major challenges during drone operations which provide valuable lessons for self-driving cars.

    Latency

    Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag.
Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult (a short worked example at the end of this article shows how quickly such delays translate into distance traveled). In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag. Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo's latency increased even further. It is imperative that control not be so remote, both to resolve the latency issue and to increase oversight of security vulnerabilities. Workstation Design Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.
UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. These two categories do not sum to 100% because both factors could be present in an accident.
Platform | Human Factors | Interface Design | Procedure Design
Army Hunter | 47% | 20% | 20%
Army Shadow | 21% | 80% | 40%
Air Force Predator | 67% | 38% | 75%
Air Force Global Hawk | 33% | 100% | 0%
Many UAV crashes have been caused by poor human control systems. In one case, buttons were placed on the controllers such that it was relatively easy to accidentally shut off the engine instead of firing a missile. This design flaw led to accidents in which remote operators inadvertently shut the engine down rather than launching a missile. The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades. Operator Workload Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom.
Both conditions can lead to errors. When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research. Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can become quickly overwhelmed. The military has tried for years to have one person supervise many drones at once, because it is far more cost effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult. Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and reliably predict appropriate staffing levels and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective. Training Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed for safe remote operations and change its training programs. Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications for remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role is very similar to that of self-driving remote operators, are required to have formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards. Contingency Planning Aviation has strong protocols for emergencies, including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind. Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform "minimum-risk maneuvers" such as pulling to the side—but many of them didn't. This suggests gaps in contingency planning and basic fail-safe design. The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.
Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.
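To make the latency numbers above concrete, here is a minimal sketch in Python of how a total response delay translates into distance a vehicle covers before a remote command takes effect. The speed, network delay, and reaction-time values are illustrative assumptions, not figures reported in the article.

```python
# Illustrative latency budget for remote vehicle supervision.
# All numeric values are assumptions for the sake of the example.

def distance_during_delay(speed_kph: float, network_delay_s: float,
                          human_reaction_s: float = 0.3) -> float:
    """Distance (meters) a vehicle covers while a remote command is pending.

    speed_kph:        vehicle speed in km/h
    network_delay_s:  round-trip communication delay in seconds
    human_reaction_s: operator neuromuscular lag (roughly 0.2-0.5 s)
    """
    speed_ms = speed_kph * 1000 / 3600          # convert km/h to m/s
    total_delay = network_delay_s + human_reaction_s
    return speed_ms * total_delay

if __name__ == "__main__":
    # A typical city-street speed with a 2-second link delay
    # (the delay cited for early long-distance UAV teleoperation).
    print(f"{distance_during_delay(50, 2.0):.1f} m traveled before the command lands")
    # Roughly 32 m, or several car lengths, which is why teleoperation
    # over high-latency links is so error-prone.
```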
  • IEEE President’s Note: Engineering a Modern Renaissance
    Mar 01, 2026 11:00 AM PST
    Consider a powerful parallel between the advancements made during the Renaissance and the developments made by today’s engineers. The Renaissance was a uniquely fertile era. Its ethos of curiosity and creativity fostered unprecedented collaboration across disciplines. Artists, scientists, philosophers, and patrons engaged in a shared pursuit of human potential, beauty, and advancements in art, science, and literature. But the Renaissance wasn’t just a cultural awakening. It was a systems-level transformation: a convergence of disciplines, minds, and methods that redefined what humanity could achieve. And in many ways, it mirrors the collaborative spirit we strive for within our IEEE communities. Collaboration Is a Catalyst During the Renaissance, breakthroughs didn’t happen in isolation. They emerged from intersections of different disciplines. Collaboration was the norm: Artists worked with mathematicians to perfect their creations’ accuracy, and architects consulted astronomers to design buildings that reflected celestial order. It was interdisciplinary design thinking centuries before the concept was given a name. Who’s up for a Challenge? IEEE Impact Challenge, which launched in January, aims not only to address real-world problems through purpose-driven engineering but also to attract new members, foster cross‑disciplinary collaboration, and design a better world for all. The IEEE Future Tech Explorers program invites IEEE members to partner with others to inspire tomorrow’s engineers and technologists by creating interactive educational experiences that spark curiosity and open doors for young minds. The IEEE Response Quest seeks to find solutions that enable near-real-time situational awareness for those providing emergency response and relief assistance. We welcome educators, designers, engineers, and innovators from every technical discipline to come together, collaborate across communities, and demonstrate the power of IEEE when we unite around a shared purpose. Learn more at the IEEE Impact Challenge website. It is at the intersections where disciplines and communities meet that the sparks of transformation ignite. The intersection of engineering and medicine gives us lifesaving devices. The intersection of computing and art produces immersive experiences from virtual, augmented, and mixed reality technology that expands human imagination. The intersection of policy and technology ensures ethical innovation. The outcomes of these crossroads remind us that progress is rarely linear. It is woven from the threads of various expertise, perspectives, and values. When we collaborate across specialties, from electrical and biomedical to aerospace and software, we unlock new possibilities. And when we engage with industry, educators, policymakers, standard developers, and the public, we elevate those possibilities into solutions. We do it together, because no single engineer or technologist, and no one discipline can solve all the challenges we face. The Renaissance teaches us that collaboration is a catalyst for advancing society. And so, I ask: What if we are living in a new, modern renaissance? What if our members are today’s da Vincis, designing systems that serve humanity? What if our volunteers are modern-day patrons, investing time, talent, and heart into building a better world? 
What if our students and young professionals are the architects of tomorrow’s breakthroughs, fluent in computer code, ethics, and global impact, ready to collaborate across borders, sectors, and disciplines? What if our conferences, technical standards, and humanitarian technologies are the printing presses of our time, disseminating knowledge, sparking dialogue, and scaling solutions? What if our collective imagination is the canvas upon which the next century of innovation will be painted? And what if, like the Renaissance, our era is defined not only by invention but also by intersection, where many voices and perspectives converge to shape technologies that reflect humanity’s full spectrum? Imagine engineers working together with ethicists to ensure responsible AI; with environmental scientists to safeguard our planet; and with local communities to design solutions that solve their challenges. Also imagine engineers partnering with disaster relief agencies to design real-time systems, restore communication networks, and deliver lifesaving technologies when survivors need them most. So let us think like Renaissance creators. Let us design with empathy and collaborate across boundaries. Let us honor that legacy by not just preserving the past but also by building systems that empower the future for everyone. When we unite technical excellence with human purpose, we don’t just innovate; we elevate. And in doing so, we carry forward the timeless truth of the Renaissance: Humanity’s greatest achievements are born not from isolation but from intersection and connection. —Mary Ellen Randall IEEE president and CEO Please share your thoughts with me: president@ieee.org.
  • Letting Machines Decide What Matters
    Mar 01, 2026 03:00 AM PST
    In the time it takes you to read this sentence, the Large Hadron Collider (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the Standard Model of particle physics. For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As Matthew Hutson reports in “AI Hunts for the Next Big Thing in Physics,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there have been no big breakthroughs,” Hutson says. “There are key components of reality we’re completely missing.” That’s why researchers are turning artificial intelligence loose on particle physics. They aren’t simply asking AI to comb through accelerator data to confirm existing theories, Hutson explains. They’re asking AI to point the way toward theories that they’ve never imagined. “Instead of looking to support theories that humans have generated,” he says, “unsupervised AI can highlight anything out of the ordinary, expanding our reach into unknown unknowns.” By asking AI to flag anomalies in the data, researchers hope to find their way to “new physics” that extends the Standard Model. On the surface, this article might sound like another “AI for X” story. As IEEE Spectrum’s AI editor, I get a steady stream of pitches for such stories: AI for drug discovery, AI for farming, AI for wildlife tracking. Often what that really means is faster data processing or automation around the edges. Useful, sure, but incremental. What struck me in Hutson’s reporting is that this effort feels different. Instead of analyzing experimental data after the fact, the AI essentially becomes part of the instrument, scanning for subtle patterns and deciding in real time what’s interesting. At the LHC, detectors record 40 million collisions per second. There’s simply no way to preserve all that data, so engineers have always had to build filters to decide which events get saved for analysis and which are discarded; nearly everything is thrown away. Now those split-second decisions are increasingly handed to machine learning systems running on field-programmable gate arrays (FPGAs) connected to the detectors. The code must run on the chip’s limited logic and memory, and compressing a neural network into that hardware isn’t easy. Hutson describes one theorist pleading with an engineer, “Which of my algorithms fits on your bloody FPGA?” This moment is part of a much older pattern. As Hutson writes in the article, new instruments have opened doors to the unexpected throughout the history of science. Galileo’s telescope revealed moons circling Jupiter. Early microscopes exposed entire worlds of “animalcules” swimming around. These better tools didn’t just answer existing questions; they made it possible to ask new ones. If there’s a crisis in particle physics, in other words, it may not just be about missing particles. It’s about how to look beyond the limits of the human imagination. Hutson’s story suggests that AI might not solve the mysteries of the universe outright, but it could change how we search for answers.
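The article doesn't describe the specific models used at the LHC, but a common approach to this kind of unsupervised anomaly flagging is an autoencoder: a model trained only on ordinary events reconstructs them well and reconstructs rare, unusual events poorly, so reconstruction error becomes an anomaly score. The sketch below, in Python with NumPy, shows that basic idea; the feature count, model rank, and threshold are all illustrative assumptions, not details from the reporting.

```python
# Minimal sketch of unsupervised anomaly flagging for collision events.
# Assumption: each "event" is summarized as a fixed-length feature vector.
# A linear autoencoder (rank-4 PCA via the SVD) is fit on ordinary events;
# events that reconstruct poorly get a high score and are kept for analysis.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "ordinary" events: 10 correlated detector features per event.
ordinary = rng.normal(size=(5000, 10)) @ rng.normal(size=(10, 10))

mean = ordinary.mean(axis=0)
_, _, vt = np.linalg.svd(ordinary - mean, full_matrices=False)
components = vt[:4]                      # encoder/decoder weights

def anomaly_score(events: np.ndarray) -> np.ndarray:
    """Mean squared reconstruction error per event."""
    centered = events - mean
    reconstructed = centered @ components.T @ components
    return ((centered - reconstructed) ** 2).mean(axis=1)

# Keep only the most unusual ~0.1% of ordinary-looking traffic; in a real
# trigger this budget is set by storage and bandwidth limits.
threshold = np.quantile(anomaly_score(ordinary), 0.999)

new_events = rng.normal(size=(1000, 10)) @ rng.normal(size=(10, 10))
keep = anomaly_score(new_events) > threshold
print(f"kept {keep.sum()} of {len(new_events)} events for offline analysis")
```

At the LHC, the equivalent scoring has to be compressed into the limited logic and memory of an FPGA sitting next to the detector; this CPU-bound sketch only illustrates the scoring idea, not that engineering feat.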
  • Xiangyi Cheng Is Bringing AR to Classrooms and Hospitals
    Feb 28, 2026 11:00 AM PST
When Xiangyi Cheng published her first journal paper as a principal investigator in IEEE Access in 2024, it marked more than a professional milestone. For Cheng, an IEEE member and an assistant professor of mechanical engineering at Loyola Marymount University, in Los Angeles, it was the latest waypoint in a career shaped by curiosity, persistence, and a belief that technology should serve people—not the other way around. The paper's title was "Mobile Devices or Head-Mounted Displays: A Comparative Review and Analysis of Augmented Reality in Healthcare."
XIANGYI CHENG
Employer: Loyola Marymount University, in Los Angeles
Title: Assistant professor of mechanical engineering
Member grade: Member
Alma maters: China University of Mining and Technology; Texas A&M University
Cheng's work spans robotics, intelligent systems, human-machine interaction, and artificial intelligence. It has applications in patient-specific surgical planning, an approach whereby treatment is customized to the anatomy and clinical needs of each individual. Her research also covers wearables for rehabilitation and augmented-reality-enhanced engineering education. The throughline of her career is sound judgment based on critical thinking. She urges her students to avoid the temptation to accept the answers they're given by AI without cross-checking them against their own foundational understanding of the subject matter. "AI can give you ideas," Cheng says, "but it should never lead your thinking." That principle—honed through uncertainty, disciplinary shifts, and hard-earned confidence—has made Cheng an emerging voice in applied intelligent systems and a thoughtful educator preparing students for an AI-saturated world. From Xi'an to Beijing: A mind drawn to mathematics Cheng, born in Xi'an, China, grew up in a household shaped by her parents' disparate careers. Her father was a mining engineer, and her mother taught Chinese and literature at a high school. "That contrast between logical and literary thinking helped me understand myself early," Cheng says. "I liked math, and STEM felt natural to me." Several teachers reinforced her inclination, she says, particularly a math teacher whose calm, fair approach emphasized reasoning over punishments such as detention for misbehavior or failure to complete assignments. "It wasn't about being right," Cheng says. "It was about thinking clearly." In 2011 she enrolled at the China University of Mining and Technology (Beijing), where she studied mechanical engineering. After graduating with a bachelor's degree in 2015, she was unsure where the field would take her. An IEEE paper changed her trajectory Later in 2015, she traveled to the United States to study at Case Western Reserve University, in Cleveland. She initially viewed the move as exploratory rather than a long-term commitment. "I wasn't thinking about a Ph.D.," she says. "I wasn't even sure research was for me." That uncertainty shifted in 2017, when Cheng submitted her "IntuBot: Design and Prototyping of a Robotic Intubation Device" paper to the IEEE International Conference on Robotics and Automation (ICRA)—which was accepted. Intubation is a procedure in which an endotracheal tube is inserted into a patient's airway—usually through the mouth—to help them breathe. Because placing the tube correctly is not simple and usually must be done quickly, it requires training.
That’s why research into robotic or assisted intubation systems focuses on improving speed, accuracy, and safety. She presented her findings at ICRA in 2018, giving her early exposure to a global research community. “That acceptance gave me confidence,” she recalls. “It showed me I could contribute to the field.” Her advisor at Case Western encouraged her to switch from the mechanical engineering master’s program to the Ph.D. track. When the advisor moved to Texas A&M University, in College Station, in 2019, Cheng decided to transfer. She completed her Ph.D. in mechanical engineering at Texas A&M in 2022. Although she didn’t earn a degree from Case Western, she credits her experience there with clarifying her professional direction. Shortly after graduating with her Ph.D., Cheng was hired as an assistant professor of mechanical engineering at Ohio Northern University, in Ada. She left in 2024 to become an assistant professor at Loyola Marymount. Engineering for the body—and the classroom Cheng’s research focuses on human-centered engineering, particularly in health care. One of her major projects addresses syndactyly, a congenital condition in which a newborn’s fingers are fused at birth. Surgeons rely on their experience to estimate the size and shape of skin grafts to be taken from another part of the body for the corrective surgery. She is developing technology to scan the patient’s hand, extract anatomical landmarks, and use finite element analysis—a computer-based method for predicting how a physical object will behave under real-world conditions—to determine the optimal graft size and shape. Xiangyi Cheng designs human-centered intelligent systems with applications in health care and education.Xiangyi Cheng “Everyone’s hand is different,” Cheng says. “So the surgery should be personalized.” Another project centers on developing smart gloves to assist with hand rehabilitation, pairing the unaffected hand with the injured one so the person’s natural motion can help guide therapy. She also is exploring augmented reality in engineering education, using immersive visualization and AI tools to help students grasp three-dimensional concepts that are difficult to convey through traditional learning tools. Such visualization lets students see and interact with a digital world as if they’re inside it instead of viewing it on a flat screen. Teaching balance in an AI-driven world Despite working at the forefront of AI-enabled systems, Cheng cautions her students to be judicious in their use of the technology so that they don’t rely on it too heavily. “AI is not always right and perfect,” she says. “You still need to be able to judge whether the answers it provides are correct.” As AI continues to reshape engineering, Cheng remains grounded in a simple principle, she says: “We should use these tools. But we should never let them replace our judgment. AI can give you more possibilities, but thinking is still our responsibility.” In her lab and classroom, Cheng prioritizes independent thinking, critical evaluation, and persistence. Many of her research students are undergraduates, and she encourages them to take ownership of their work—planning ahead, testing ideas, and learning from failure. “The students who succeed don’t give up easily,” she says. What she finds most rewarding, she says, is watching students mature. Reserved first-year students often become confident seniors who can present complex work and manage demanding projects. 
“Getting to witness that transformation is why I teach,” she says. For students considering engineering, Cheng offers straightforward advice: “Focus on mathematics. Engineering looks hands-on, but math is the foundation behind everything.” With practice and persistence, she says, students can succeed and find meaning in the field. Why IEEE continues to matter Cheng joined IEEE in 2017, the year she submitted her first paper to ICRA. The organization has remained central to her professional development, she says. She has served as a reviewer for IEEE journals and conferences including Robotics and Automation Letters, Transactions on Medical Robotics and Bionics, Transactions on Robotics, the International Conference on Intelligent Robots and Systems, and ICRA. IEEE’s interdisciplinary scope aligns naturally with her work, she says, adding that the organization is “one of the few places that truly welcomes research across boundaries.” More personally, IEEE helped her see a future she had not initially imagined. “That first conference was a turning point,” she says. “It helped me realize I belonged.”
  • This Power Grid Pioneer’s EV Prediction Came 100 Years Too Soon
    Feb 28, 2026 06:00 AM PST
    Charles Proteus Steinmetz was a towering figure in the early decades of electrical engineering, easily the intellectual equal of Thomas Edison and Nikola Tesla—men he considered his friends. One of Steinmetz’s most significant achievements was to quantify and characterize the phenomenon of magnetic hysteresis—the behavior of magnetism in materials—and then devise a simple law that allowed for predictable transformer and motor design. He also established a revolutionary framework for analyzing AC circuits, which is still taught today in power engineering. And from 1893, he served as chief consulting engineer at General Electric at a pivotal moment for the young company and for the U.S. effort to expand its power grid. For these and other accomplishments, he was well known in his time, even if he’s not exactly a household name today. Steinmetz was also an evangelist for electric vehicles. In March 1920, he typed out his thoughts, comparing the pros and cons of EVs to the gasoline-propelled alternative. Among the advantages: low cost of maintenance, reliability, simplicity of operation, and lower cost of operation. The disadvantages: dependence on charging stations, limited range on a single charge, and lower speeds. More than a century later, his list remains remarkably pertinent. Steinmetz could often be seen decked out in a suit and top hat, smoking his trademark BlackStone panatela cigar while riding around Schenectady, N.Y., in his 1914 Detroit Electric sedan. According to John Spinelli, emeritus professor of electrical and computer engineering at Union College, in Schenectady, sometimes both Steinmetz and his chauffeur sat in the backseat—you could control the car from both the front and the rear—so that it would appear to be a driverless car. With a top speed of 40 kilometers per hour (25 miles per hour), the car ran on 14 six-volt batteries and could go about 48 km between charges. Steinmetz’s 1914 Detroit Electric car is now at Union College in Schenectady, N.Y., where Steinmetz had founded, chaired, and taught in the department of electrical engineering.Paul Buckowski/Union College In 1971, the car was purchased by Union College, where Steinmetz had founded, chaired, and taught in the department of electrical engineering. The car had been discovered rotting in a field, so it needed some work. Over the next decade, faculty and engineering students restored it to its former glory. Still in running condition, it’s now on permanent display at the college. Steinmetz’s Contributions to Electrical Engineering Karl August Rudolf Steinmetz was born in 1865 in Breslau, Prussia (now known as Wrocław, Poland). He studied mathematics, physics, and the burgeoning field of electricity at the University of Breslau. He also joined a student socialist club and edited the party newspaper, The People’s Voice. He completed his doctoral studies, but before receiving his degree, Steinmetz fled to Switzerland in 1888, after his socialist writings came under the scrutiny of the Bismarck government. Steinmetz immigrated to New York the following year, anglicized his first name, dropped his two middle names, and added Proteus, a nickname he had picked up at university (after the shape-shifting sea god of Greek mythology). Eventually, he became a U.S. citizen. 
Charles Proteus Steinmetz solved a number of important problems that helped the power grid expand.Bettmann/Getty Images In January 1892, Steinmetz burst onto the engineering scene when he read his paper “On the Law of Hysteresis” before the American Institute of Electrical Engineers, a forerunner of today’s IEEE. I can’t quite imagine sitting through the delivery of its 62 pages, but those assembled recognized its groundbreaking nature. The ideas Steinmetz outlined allowed engineers to calculate power losses in the magnetic components of electrical machinery during the design phase. Prior to this, the design process for transformers and electric motors was largely trial and error, and power losses could be measured only after the machine was built, which greatly added to the cost. Steinmetz was not just an equations and theory guy, though. He loved working in the lab and building things. In 1893, General Electric acquired the small manufacturing firm of Eickemeyer & Osterheld, in Yonkers, N.Y., where Steinmetz had worked since shortly after his arrival in the United States. So Steinmetz began his new life as a corporate engineer, an interesting turn for the socialist. During his first few years with GE, he mostly designed generators and transformers. But he also created an informal position for himself as a consultant, giving expert opinions on various problems across divisions. He eventually formalized this role, becoming GE’s chief consulting engineer, and he maintained a relationship with the company for the rest of his life, even after joining the faculty of Union College in 1902. By the time Steinmetz died in 1923 at the age of 58, he had been granted more than 200 patents and had made major contributions to various subfields in electrical engineering, including phasors and complex numbers (for steady-state AC analysis); electrical transients, switching surges, and surge protection (based on his research on lightning); industrial research (including how to run a corporate lab); and engineering methods (by writing textbooks that standardized practice). Why Steinmetz Believed in Electric Cars By 1914, Steinmetz was convinced that the future of transportation was electric. In June, he addressed the National Electric Light Association convention in Philadelphia with a bold prediction: “I have no doubt that in 10 years, more or less—rather less than more—we will see the field of the pleasure and business vehicle covered by such an electric car in large numbers. And I believe I underestimate when I say that 1,000,000 or more will be used.” As we now know, Steinmetz was overly optimistic. At the time, there were about 1.2 million gasoline-powered cars in use in the United States, and only about 35,000 EVs. It would take until 2018 for the number of EVs (including plug-in hybrids) on U.S. roads to surpass a million. Worldwide, there are now about 60 million electric vehicles in use. But Steinmetz had his reasons. He firmly believed that electric vehicles would flourish in urban areas, where most rides involved short distances at low speed. He also thought EVs would be a boon for power companies, which were eager to drum up more business, especially at night. With 1 million electric cars being charged about 5 kilowatt-hours on most nights, and at a rate of 5 cents per kilowatt-hour, Steinmetz predicted US $75 million (about $2.5 billion today) of new business for central power stations each year. 
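The article doesn't show the arithmetic behind that $75 million figure, but it works out if we assume roughly 300 charging nights per year (my assumption, not a number from the article):

$$
1{,}000{,}000\ \text{cars} \times 5\ \text{kWh/night} \times \$0.05/\text{kWh} \times 300\ \text{nights/year} \approx \$75\ \text{million per year}
$$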
In 1971, Union College purchased Steinmetz’s car, which had been found rotting in a field, and faculty and students restored it to working condition.Special Collections & Archives/Schaffer Library/Union College Steinmetz went to work to improve the electric car. He developed a double-rotor motor that was integrated into the rear axle, which did away with the need for a mechanical differential or drive shaft and drastically reduced the overall weight, which improved the mileage. Dey Electric Corp. incorporated Steinmetz’s design into its electric roadster and priced it under $1,000. Unfortunately, an internal combustion engine Ford Model T cost about half as much, and the Dey roadster flopped, ending production within a year. Undeterred, Steinmetz formed the Steinmetz Electric Motor Car Corp. in 1920 with the initial goal of bringing to market an electric truck for deliveries and light industrial use. The first truck debuted on a cold February day in 1922 with a publicity stunt of climbing the steep Miller Avenue hill in Brooklyn, N.Y. According to a report in The New York Times, the vehicle went up the 14.5 percent grade between Jamaica Avenue and Highland Boulevard in 51 seconds. During a second climb, it stopped a number of times to show how easily it restarted. The truck had a range of 84 km (52 miles). The company planned to manufacture 1,000 trucks per year and 300 lightweight delivery cars, plus a five-passenger coupe, but it made a total of only 48 vehicles. After Steinmetz died in 1923, the company soon ceased operation. Steinmetz wasn’t only bullish on the electric car, but on electricity in general. A New York Times article recorded his belief that by 2023, we would work no more than 4 hours a day, 200 days a year because electricity would have eliminated the drudgery and unpleasantness of labor. He also predicted that electricity would bring about an end to urban pollution: “Every city would be a spotless town.” With an expansion of leisure time, people would be healthier, engaging in gardening (especially growing their own food) and pursuing educational interests to become “much more intelligent and self-expressive creature[s].” Steinmetz’s Chosen Family I decided to write about Steinmetz last year, after IEEE Spectrum published an essay I wrote about why engineering needs the humanities. The article contains this line: “In 1909, none other than Charles Proteus Steinmetz advocated for including the classics in engineering education.” I had been impressed to learn of Steinmetz’s recognition of the value of a liberal arts education. But my copy editor didn’t know who Steinmetz was or why he merited the qualifier “none other.” More people should know about this remarkable man, I decided. And so I went looking for a museum object associated with him, so I could include him in a Past Forward column. Steinmetz [left] was easily the intellectual equal of Thomas Edison [right], whom he considered a friend.Corbis/Getty Images The electric car is only one avenue into Steinmetz’s life. I could instead have looked into Steinmetz solids (the geometric shapes that form when two or three identical cylinders intersect at right angles), Steinmetz curves (the edges of a Steinmetz solid), or the Steinmetz equivalent circuit (a mathematical model that describes a transformer using resistors and inductors). But none of those concepts could be easily captured in a picture-worthy object. His love of his electric car, on the other hand, was a fun and fitting entry point for this most unusual engineer. 
I also saw an opportunity to highlight how Steinmetz became a family man. Steinmetz had dwarfism—he stood just 122 centimeters tall—as well as kyphosis, a severe curvature of the spine, as did his father and grandfather. He didn’t wish to pass along those traits, and so he never married or had children of his own. But that didn’t mean he didn’t want a family. In 1903, Steinmetz’s favorite lab assistant, Joseph LeRoy Hayden, told his boss that he was getting married. Steinmetz invited the couple to dinner, and then invited them to live in his large home. They agreed to this unusual living arrangement, with Corinne Rost Hayden running the household and cooking for her husband and Steinmetz. She forced the men to set aside their work for regular family meals. Eventually, the Hayden family expanded, welcoming Joe, Midge, and Billy. Steinmetz legally adopted the elder Hayden, thereby gaining three grandchildren as well. Steinmetz, whom The New York Times had named a “modern Jove” who “hurls thunderbolts at will” (from a high-voltage lightning generator), delighted at entertaining the grandkids with wondrous tricks of electricity and chemistry. In writing about the history of electrical engineering, I sometimes fall into the trap of focusing too much on the technology. But it’s just as important to recognize the people behind the technology—their personalities, their frailties, their feelings, their challenges. Steinmetz faced adversity for his political beliefs, for being an immigrant, and for his physical stature, yet none of that ever stopped him. In word and deed, he showed that he had a generous heart as mighty as his intellect. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the March 2026 print issue as “Charles Proteus Steinmetz Loved His Electric Car.” References IEEE Power & Energy Magazine published Steinmetz’s pro/con list comparing electric cars to those with internal combustion engines in the September/October 2005 issue, along with a good biographical overview of Steinmetz by Carl Sulzberger. Union College published a nice story about the restoration of Steinmetz’s electric car in 2014, when it received its permanent home on campus. There are many biographies of Steinmetz, one published as early as 1924, but I am particularly fond of Steinmetz: Engineer and Socialist by Ronald Kline (Johns Hopkins University Press, 1992). Gilbert King’s 2011 article “Charles Proteus Steinmetz, the Wizard of Schenectady” for Smithsonian magazine describes Steinmetz’s chosen family and includes several fun anecdotes not mentioned above.
  • Video Friday: Robot Dogs Haul Produce From the Field
    Feb 27, 2026 10:00 AM PST
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1–5 June 2026, VIENNA Enjoy today’s videos! Our robots Lynx M20 help transport harvested crops in mountainous farmland—tackling the rural “last mile” logistics challenge. [ Deep Robotics ] Once again, I would point out that now that we are reaching peak humanoid robots doing humanoid things, we are inevitably about to see humanoid robots doing nonhumanoid things. [ Unitree ] In a study, a team of researchers from the Max Planck Institute for Intelligent Systems, the University of Michigan, and Cornell University show that groups of magnetic microrobots can generate fluidic forces strong enough to rotate objects in different directions without touching them. These microrobot swarms can turn gear systems, rotate objects much larger than the robots themselves, assemble structures on their own, and even pull in or push away many small objects. [ Science ] via [ Max Planck Institute ] Bipedal—or two-legged—autonomous robots can be quite agile. This makes them useful for performing tasks on uneven terrain, such as carrying equipment through outdoor environments or performing maintenance on an oceangoing ship. However, unstable or unpredictable conditions also increase the possibility of a robot wipeout. Until now, there’s been a significant lack of research into how a robot recovers when its direction shifts—for example, a robot losing balance when a truck makes a quick turn. The team aims to fix this research gap. [ Georgia Tech ] Robotics is about controlling energy, motion, and uncertainty in the real world. [ Carnegie Mellon University ] Delicious dinner cooked by our robot Robody. We’ve asked our investors to speak about why they’re along for the ride. [ Devanthro ] Tilt-rotor aerial robots enable omnidirectional maneuvering through thrust vectoring, but introduce significant control challenges due to the strong coupling between joint and rotor dynamics. This work investigates reinforcement learning for omnidirectional aerial motion control on overactuated tiltable quadrotors that prioritizes robustness and agility. [ Dragon Lab ] At the [Carnegie Mellon University] Robotic Innovation Center’s 75,000-gallon water tank, members of the TartanAUV student group worked to further develop their autonomous underwater vehicle (AUV) called Osprey. The team, which takes part in the annual RoboSub competition sponsored by the U.S. Office of Naval Research, is comprised primarily of undergraduate engineering and robotics students. [ Carnegie Mellon University ] Sure seems like the only person who would want a robot dog is a person who does not in fact want a dog. Compact size, industrial capability. Maximum torque of 90N·m, over 4 hours of no-load runtime, IP54 rainproof design. With a 15-kg payload, range exceeds 13 km. Open secondary development, empowering industry applications. [ Unitree ] If your robot video includes tasty baked goods it will be included in Video Friday. [ QB Robotics ] Astorino is a 6-axis educational robot created for practical and affordable teaching of robotics in schools and beyond. It has been created with 3D printing, so it allows for experimentation and the possible addition of parts. With its design and programming, it replicates the actions of industrial robots giving students the necessary skills for future work. 
[ Astorino by Kawasaki ] We need more autonomous driving datasets that accurately reflect how sucky driving can be a lot of the time. [ ASRL ] This Carnegie Mellon University Robotics Institute Seminar is by CMU's own Victoria Webster-Wood, on "Robots as Models for Biology and Biology as Materials for Robots." In the last century, it was common to envision robots as shining metal structures with rigid and halting motion. This imagery is in contrast to the fluid and organic motion of living organisms that inhabit our natural world. The adaptability, complex control, and advanced learning capabilities observed in animals are not yet fully understood, and therefore have not been fully captured by current robotic systems. Furthermore, many of the mechanical properties and control capabilities seen in animals have yet to be achieved in robotic platforms. In this talk, I will share an interdisciplinary research vision for robots as models for neuroscience and biology as materials for robots. [ CMU RI ]
  • Bond Strength, Biocompatibility, and Beyond
    Feb 27, 2026 03:00 AM PST
Designing a medical device? This whitepaper helps you evaluate adhesive options for biocompatibility, sterilization resistance, and manufacturability — so you can make the right material decision early. What you will learn:
• How to select between epoxy, silicone, cyanoacrylate, and UV/LED-curable adhesives based on your device requirements
• Which adhesive systems meet USP Class VI and ISO 10993-5 biocompatibility standards
• How different sterilization methods, such as autoclaving, EtO, gamma, and chemical immersion, affect adhesive performance over repeated cycles
• Why integrating adhesive selection early in the design process reduces costly trade-offs between performance and manufacturability
Download this free whitepaper now!
  • This Startup Makes Access to Rehabilitation Facilities Easier
    Feb 26, 2026 11:00 AM PST
    When doctors in the United States refer patients to specialty or post-acute medical care such as physical therapy or long-term nursing care, nearly half never complete the process of finding help. Referrals stall in part because provider directories are outdated, insurance coverage is unclear, and much coordination still relies on phone calls and faxes. Carenector, a Denver-based startup launched in 2024, is working to improve the process with software that quickly connects patients with appropriate care providers while protecting their personal data. Instead of presenting a long list of providers, many of whom would not be a good match, the company’s referral platform uses AI to eliminate facilities that don’t meet the patient’s rehabilitation needs, don’t accept the patient’s insurance, or are not conveniently located. Carenector Cofounder: Naheem Noah Founded: 2024 Headquarters: Denver Employees: 5 The startup’s platform serves individuals seeking care as well as health care organizations and care coordination teams that manage patient referrals. The company aims to help patients while reducing the administrative burden on clinicians and discharge planners, says cofounder Naheem Noah. As of now, Carenector works with patients and facilities only in Colorado, but it plans to expand coverage nationwide. Noah, a Ph.D. candidate who joined IEEE in 2022 as a student member, encountered the referral problem firsthand after tearing an anterior cruciate ligament in a knee while playing soccer. Finding a physical therapist who accepted his insurance, specialized in ACL rehabilitation, had appointments available, and was near his home required hours of phone calls and searches through inaccurate provider lists, he says. That experience helped shape the company’s direction, but Carenector is aimed at a broader, persistent failure in U.S. health care coordination. A broken referral system The company took shape when Noah connected with his cofounder, licensed social worker Aminata Diarra, a social director at a nursing facility. Her role included discharge planning: placing patients in post-acute-care facilities that bridge the gap between hospital discharge and the patient’s ability to independently manage life’s daily activities. For a single patient, Diarra says, that often meant she made 10 to 15 phone calls over the course of a week to find a facility with a bed available, that accepted the patient’s insurance, and that could meet the care requirements. She and Noah soon realized they were dealing with the same broken system from opposite sides. Existing research on referral lapses supported their experience. Primary care physicians often send referral notes—analogous to prescriptions—that list the patient’s medical history and describe the needed treatment. Noah discovered that only about one-third of the notes are transmitted in a way that allows providers at nursing homes and rehab facilities to access the information. Physicians often post their suggestions for ongoing treatment in sections of a patient’s electronic health records, but providers at post-acute facilities don’t have access to those because of medical privacy laws. What gets shared is a pared-down document that omits progress notes and discharge summaries. Engineering a research-driven startup Noah is currently a researcher in the University of Denver computer science department, where his academic work focuses on privacy and security in digital systems. 
He is Carenector's chief executive and technical lead, overseeing the system's design, making technical decisions, and meeting with investors. Although the startup is separate from his dissertation research, the company reflects his broader interest in building secure systems that work in real-world conditions. Because he founded the company while still a student, he has access to university resources that many early-stage startups lack. He has participated in the university's BaseCamp accelerator and received mentorship and business planning support. The Carenector team was assembled with future scaling and health care compliance in mind. The group includes professionals from regulatory, legal, and data engineering fields. Replacing phone calls with digital matching By using standardized digital information shared among medical facilities, Carenector eliminates the need for staff to make phone calls or send faxes. At the core of the platform is a structured database that links care providers—including post-acute, specialty, and rehabilitation facilities—with insurance plan criteria and facility attributes such as accessibility and service capabilities. One of the biggest challenges for Noah is getting accurate data on which services facilities offer, which insurance they accept, and whether a patient's insurance plan covers the treatment proposed by the referring physician. "Health care information in the United States is not centralized," he says, "and insurance provider directories are often wrong or out of date." To address that, Carenector incorporates publicly available datasets from the U.S. Centers for Medicare & Medicaid Services (CMS), including plan attributes, service areas, quality ratings, and issuer-level transparency data. These public-use files provide plan-level and provider-level information that helps standardize coverage criteria, geographic availability, and performance indicators. Carenector integrates this structured public data with facility-supplied information and referral outcome analytics to improve matching accuracy. This structured data helps Carenector evaluate plan criteria, provider capabilities, geographic availability, and quality indicators to support referral decision-making. The company standardizes and organizes the information within its own system architecture and uses mapping and geolocation APIs to integrate location-based filtering and workflow functionality for patients, providers, and care coordinators. Because CMS data is updated periodically, Carenector supplements it with additional structured data sources and referral outcome analytics to better understand plan acceptance patterns. Room availability information comes directly from participating facilities, which are responsible for updating their status within Carenector's system. Whether referrals succeed or fail provides critical feedback, Noah says. When referrals to specific facilities repeatedly go uncompleted—meaning the patient does not receive the recommended care from the provider—Carenector's AI-driven matching algorithm adjusts to that pattern and reduces the likelihood of that facility being considered for similar cases. Facilities that consistently accept and complete referrals are ranked preferentially.
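Carenector hasn't published its matching code, but the filter-then-rank logic described above can be sketched roughly as follows. In this Python sketch, every field name, weight, and scoring rule is a hypothetical stand-in, not the company's actual implementation.

```python
# Rough sketch of referral matching as described in the article:
# hard filters (service, insurance, availability, distance), then a ranking
# that favors facilities with a history of completed referrals.
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    services: set
    insurances: set
    beds_available: int
    distance_km: float
    completed_referrals: int = 0
    failed_referrals: int = 0

    def completion_rate(self) -> float:
        total = self.completed_referrals + self.failed_referrals
        return self.completed_referrals / total if total else 0.5  # neutral prior

def match(facilities, *, service: str, insurance: str,
          max_distance_km: float = 40.0):
    """Drop facilities that cannot serve the patient, then rank the rest."""
    eligible = [
        f for f in facilities
        if service in f.services
        and insurance in f.insurances
        and f.beds_available > 0
        and f.distance_km <= max_distance_km
    ]
    # Prefer facilities that complete referrals, breaking ties by distance.
    return sorted(eligible, key=lambda f: (-f.completion_rate(), f.distance_km))

facilities = [
    Facility("Valley Rehab", {"ACL rehab"}, {"PlanA"}, 2, 8.0, 40, 5),
    Facility("Summit PT", {"ACL rehab"}, {"PlanA"}, 1, 3.0, 10, 20),
    Facility("Lakeside Care", {"long-term nursing"}, {"PlanA"}, 5, 2.0),
]
for f in match(facilities, service="ACL rehab", insurance="PlanA"):
    print(f.name, round(f.completion_rate(), 2))
```

The real platform presumably layers insurance-plan rules, CMS quality data, and geolocation services on top of this kind of filter; the sketch only shows the general shape of the logic.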
Apps for patients and facilities The company has poured its data management wizardry and AI smarts into apps for patients and clinicians. The patient app helps users locate appropriate health care services at no cost. Users can search for care by service type, ZIP code, or insurance company without creating an account. They receive a list of matching facilities that can be shared via clipboard or sent by email to themselves or family members. In the facility app, clinicians enter the diagnosis, rehabilitation needs, equipment requirements, insurance type, and location without sharing personally identifiable patient information. Organizations can communicate using secure messages that disappear after a set period. Files and images are shown only once and deleted after viewing. Facilities that use the app pay Carenector a flat fee for each successful referral. The patient app is free. The startup does not sell or share data with third parties, Noah says. "Privacy is a central design requirement for Carenector's system, not a last-minute add-on to the finished product," he says. The company minimizes the collection of personal data to avoid becoming a data repository. Although its role is limited to coordinating referrals, Carenector is working with independent security auditors to validate that its operational and data-handling practices align with Health Insurance Portability and Accountability Act (HIPAA) requirements. The HIPAA law sets standards meant to protect sensitive patient information from unauthorized disclosure. Noah says he is confident that Carenector will achieve that validation because the app is designed to reduce the collection and exposure of sensitive information wherever possible. Business model and measured expansion Carenector's growth plan, Noah says, is strategic. Rather than scaling rapidly, he says, he is looking to enter one region at a time, incorporating feedback from each local deployment before expanding the company further. He envisions that in five years, Carenector will serve as a core piece of health care referral infrastructure—embedded in the workflows of hospitals, post-acute facilities, insurers, employers, and major electronic health record systems such as Epic and Cerner—while also increasing visibility for care facilities in underserved and remote areas. The plan, he says, is to support thousands of facility recommendations per day, compared with the approximately 200 daily facility recommendations it currently generates. Noah also looks forward to the broader adoption of APIs that allow care coordination and facility discovery to occur directly within clinical workflows. He says he sees his startup as a way to reduce unnecessary stress from moments when patients are vulnerable. "By replacing manual coordination with clear rules, accurate data, and built-in privacy protections," he says, "we hope to make accessing care a routine step in recovery—not another obstacle."
  • New Path to Battery-Grade Lithium Uses Electrochemistry
    Feb 26, 2026 09:00 AM PST
    As electric vehicles roll off assembly lines, a bottleneck sits upstream: lithium refinement. Turning raw lithium into the compounds needed for batteries is expensive, messy, and energy intensive, but Mangrove Lithium, a Vancouver-based startup, has a better way. The company has developed an electrochemical refining process that converts lithium feedstocks into battery-grade lithium hydroxide. Converting raw lithium to lithium hydroxide typically requires roasting spodumene—a mineral from which lithium is derived—at high temperatures, and then leaching it with acid to convert it to lithium sulfate. That compound then needs to be converted to lithium hydroxide. “It’s a thermochemical reaction that uses heavy amounts of reagent chemicals, and generates a sodium sulfate waste stream,” says Ryan Day, Mangrove Lithium’s director of operations. Further tightening the bottleneck, the majority of the world’s lithium—60 to 70 percent—is now refined in China, and export restrictions and geopolitical tensions have disrupted supply chains in recent years. Shipping raw lithium overseas to be refined also adds to batteries’ total carbon footprint. A new model for lithium refining could reshape not just the economics of electric vehicles but also the geography and environmental footprint of the global battery supply chain. Mangrove’s demo plant in British Columbia is scheduled to start production in the second half of 2026. How Does Mangrove’s Refinement Work? Mangrove replaces the conventional, resource-intensive reaction with a process that uses electricity, water, and oxygen. In an electrochemical cell, they flow brine through an electrolyzer, which consists of a metal box with three compartments between the cathode and anode. The compartments are separated by ion exchange membranes, semipermeable barriers that allow only certain ions to pass. Lithium sulfate flows through the central compartment, and the cell’s electric field splits the salt apart. “Lithium, which is a positive ion, will move across a membrane toward the cathode,” says Day. There, “we are reacting oxygen and water to create hydroxide ions, which join with the lithium from the salt to make lithium hydroxide.” Meanwhile, on the opposite side of the cell, the sulfate—a negative ion—moves toward the anode, where water is being split to produce protons and oxygen gas. The protons combine with sulfate ions to make sulfuric acid. “You run that process continuously, and over time you’re generating lithium hydroxide, which you can send to a crystallizer,” Day says. “There’s no significant waste product, and all you’re feeding in is brine, water, oxygen, and electricity.” The sulfuric acid is recovered and can be circulated back upstream to leach more brine from the raw feed material. In general, keeping the ion exchange membrane intact is one of the biggest challenges for scaling this type of process, says Feifei Shi, assistant professor of energy engineering at Penn State. Shi, who researches electrochemical-based refinement methods, notes that the approach can more easily activate the necessary reactions, but faces limitations for large-scale applications. The electrochemical process separates out lithium by passing it through three compartments separated by semipermeable barriers. Mangrove Lithium Mangrove’s Oxygen-Based Cathode Mangrove’s key innovation and what enables the process is an oxygen-based cathode. “Driving the reaction requires detailed engineering,” says Day. 
The company designed an electrode that lets a gas and a liquid react together, using just enough water to make the oxygen reaction work—without adding so much that it floods the system and creates hydrogen gas instead. The electrodes are made with a proprietary process that combines several dedicated layers, allowing a balanced flow of water and oxygen to reach the active catalyst sites. This design favors the oxygen-reduction reaction for over 99.5 percent of the total cathode activity. It also reduces the amount of electricity needed to drive the process, because "oxygen reduction requires less voltage than water reduction," Day says. Demand for battery minerals is surging beyond just lithium, with automakers competing for supplies of nickel, cobalt, graphite, and manganese. Simultaneously, utilities are deploying grid-scale batteries that use the same materials in even larger volumes. Refining capacity—not just mining—could become the critical choke point in this buildout, because battery makers require highly specified, ultrapure compounds. While Mangrove is initially targeting lithium, its electrochemical architecture is not inherently lithium-specific and could be adapted to other battery materials that face similar purification bottlenecks. Nickel and cobalt sulfate production, for example, still relies on multistep precipitation and solvent-extraction processes that generate significant waste and require large reagent inputs. "It would work immediately in application to other alkali-metal salts," Day says. Mangrove's demo plant in British Columbia will make 1,000 tonnes per year of lithium hydroxide. If the company can scale its technology as it hopes, it could begin to reshape not just the battery supply chain but also the geopolitics of the energy transition.
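For readers who want the cell chemistry made explicit, the process described above corresponds to the following standard half-reactions; the stoichiometry is my reconstruction from the article's description, not a statement from Mangrove:

$$
\text{Cathode (oxygen reduction):}\quad \mathrm{O_2 + 2\,H_2O + 4\,e^- \rightarrow 4\,OH^-}
$$
$$
\text{Anode (water oxidation):}\quad \mathrm{2\,H_2O \rightarrow O_2 + 4\,H^+ + 4\,e^-}
$$
$$
\text{Overall, with the salt ions:}\quad \mathrm{Li_2SO_4 + 2\,H_2O \rightarrow 2\,LiOH + H_2SO_4}
$$

In the balanced overall reaction, the oxygen consumed at the cathode equals the oxygen released at the anode, which is consistent with Day's point that the process is fed essentially only brine, water, oxygen, and electricity.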
  • From Headsets to Hearing Aids
    Feb 26, 2026 06:23 AM PST
    This is a sponsored article brought to you by Audio Precision. Bluetooth started as a simple wireless connection between a phone and a headset. Since its inception, it has become the invisible scaffolding for music, calls, gaming, and hearing assistance across consumer and professional devices alike. Bluetooth’s evolution to support more use cases has been driven not by a single breakthrough but by a steady accumulation of radio innovations, codecs, transport schemes, and power management strategies that together enhance the user experience with wireless audio. Today, a new architectural baseline—Bluetooth Low Energy (LE) Audio—promises low-power, high-quality, and scalable audio delivery to open up the standard for an even wider range of applications [1][2]. Evolution of Bluetooth Radio Technologies The original Basic Rate (BR) radio introduced with Bluetooth 1.0 in 1999 used Gaussian frequency-shift keying (GFSK) at 1 Msym/s, hopping through 79 channels in the 2.4 GHz band with alternating transmission directions in a tight time-division duplex rhythm. The short-range robustness and reliability afforded by this technology helped early wireless devices reach performance on par with traditional cable-based devices. In 2003, the Advanced Audio Distribution Profile (A2DP) arrived as the enabling standard for stereo audio streaming over Bluetooth Classic, marking the technology’s expansion beyond voice into music playback. A2DP uses the Audio/Video Distribution Transport Protocol (AVDTP) for stream management and mandates the Sub-Band Codec (SBC) as its baseline audio compression format. The SBC codec employs 4- or 8-band analysis/synthesis filter banks with adaptive bit allocation, spanning bitrates from 128 to 345 kbps for stereo content. Embedded DSP work showed how to optimize SBC implementation—Weighted Overlap Add (WOLA) filter banks, fixed-point pipelines, and real-time decoding that is audibly indistinguishable from floating point reference implementations while consuming fewer MIPS and milliwatts [3]. In 2004, Bluetooth 2.0 introduced Enhanced Data Rate (EDR), which moved payloads to π/4-DQPSK or 8DPSK modulation to boost gross throughput to 2–3 Mb/s, while retaining GFSK for packet headers. This innovation boosted stereo streaming quality and adoption during the decade. Around 2010, Bluetooth 4.0 introduced Bluetooth Low Energy (BLE) and its LE 1M PHY. This new radio continued to use GFSK but was tuned for low duty cycles and intermittent bursts. This fundamental difference from BR/EDR (Basic Rate/Enhanced Data Rate) led to common usage of the term “Bluetooth Classic” for the original BR/EDR radio, to distinguish it from BLE. Isochronous Transport Architecture In late 2016, Bluetooth 5.0 introduced the LE 2M PHY, doubling the symbol rate to 2 Msym/s. On links with healthy margin, halving a packet’s airtime reduces collision exposure and lowers the energy spent per bit. By 2020, Bluetooth 5.2 and its LE Audio feature set radically shifted the focus from continuous streaming to a transport designed explicitly around deadlines. LE (Low Energy) Audio leverages the existing LE 1M and LE 2M PHYs but carries audio over isochronous channels—slots with timing commitments. The isochronous channel architecture comes in two forms. 
Connected Isochronous Streams (CIS) are unicast flows whose parameters (intervals, subevents, retransmissions) can be tuned to meet frame deadlines with bounded jitter, enabling the radio to sleep predictably between bursts while the application knows precisely when a frame will arrive. A systematic review of BLE performance corroborates that throughput and latency in the real world are bounded as much by connection interval, event length, and retransmissions as by the raw symbol rate; under the right parameters, faster PHYs reduce radio-on time and improve energy efficiency, while coded long-range modes trade airtime for robustness in harsher channels [1]. Broadcast Isochronous Streams (BIS)—commercially branded as Auracast—extend that scheduling to one-to-many transmissions, enabling connectionless audio delivery to unlimited receivers [2][7]. This architectural departure from continuous streams requires careful selection of intervals, packetization, and codec framing, along with appropriate models to determine parameters that meet deadlines without wasting airtime. Markov chain analyses of CIS—validated via simulation—translate developer choices (intervals, subevents, retransmission counts) into quantitative predictions for packet loss rate (PLR), backlog, delay, throughput, and average power consumption [7]. The LC3 Codec Advantage LE Audio’s Low Complexity Communication Codec (LC3) fundamentally shifts the bitrate-quality-complexity balance. Peer-reviewed listening tests across speech and music demonstrate that LC3 delivers superior perceived quality compared with SBC and mSBC at roughly half the bitrate; it also provides robust packet loss concealment and flexible frame sizes, including low-latency modes that make the encoding delay a smaller slice of the end‑to-end budget [2]. The benefits are practical: lower bitrate shrinks airtime, which reduces collision risk; shorter frames pair cleanly with CIS scheduling so deadlines are easier to meet; the codec’s computational footprint is modest enough for miniature devices [2]. (A rough airtime comparison appears at the end of this article.) Audio Precision provides high-performance audio analyzers, accessories, and applications that have helped engineers worldwide design, validate, characterize, and manufacture audio products for over 40 years. Hearing Aids: Power-Constrained Wireless Audio Modern hearing devices are a complex assembly of multiple microphones, digital signal processors, and miniature power sources. Except for Completely-in-Canal (CIC) and Invisible-in-Canal (IIC) designs, which are so small they fit entirely within the ear canal, most hearing aids incorporate two or more microphones to support directional processing, beamforming, and noise reduction. Audio output is provided by a single electro-acoustic transducer. The compact form factor severely limits battery capacity, making energy efficiency critical. Compared to Bluetooth Classic (A2DP/HFP), LE Audio improves energy efficiency through three broad mechanisms: the LC3 codec achieves equivalent perceived audio quality at significantly lower bitrates than the SBC codec used in Bluetooth Classic; the LE 1M and 2M PHYs reduce on-air time per packet relative to BR/EDR; and Connected Isochronous Streams (CIS) enable precise scheduling, allowing the radio to sleep between transmissions, whereas BR/EDR audio requires longer active radio periods. BLE‑compliant wake‑up receivers (WuRx) monitor the air on micro- to nanowatt power budgets and trigger the main radio when they detect a packet preamble. 
Reported designs demonstrate sensitivity to extremely weak radio signals (down to −80 dBm), with within‑bit duty cycling that trades latency for power from hundreds of microseconds to seconds [4]. Sleep scheduling techniques primarily apply heuristics for periodic check‑ins, event‑driven wake-ups, clustering, and time division to stretch lifetime while meeting QoS targets [5][6]. From True Wireless Stereo to Coordinated Sets Bluetooth Classic’s A2DP supports only a single audio stream. In Bluetooth Classic’s True Wireless Stereo (TWS) devices, one earbud acts as the primary, receiving the stereo stream from the phone and relaying audio to the secondary earbud—a forwarding or relay architecture. The additional transmission hop adds latency to the secondary earbud, while increasing power consumption in the primary. LE Audio eliminates this limitation entirely. The technology’s dual CIS capability lets the phone send synchronized left and right streams directly to both earbuds. This architectural shift enables independent CIS connections from the phone to the left and right earbuds or hearing aids, enabling synchronized stereo delivery without relaying. Discovery and pairing have evolved to match multi‑device use. The Coordinated Set Identification Service (CSIS) allows two earbuds—or two hearing aids—to be discovered and managed as a coordinated set rather than independently, with resolvable identifiers and set‑level locks. While peer‑reviewed empirical literature on CSIS is thin, timing and carrier synchronization theory is mature: clock‑offset estimation, jitter control, phase‑locked loops, buffer alignment, and recovery strategies hold binaural timing within tens of milliseconds for lip‑sync and spatial imaging [9]. Gaming Headsets: Low Latency With Bidirectional Stereo Gaming represents a demanding stress test for wireless audio. Bluetooth Classic’s Headset Profile (HSP) and Hands-Free Profile (HFP) support bidirectional audio for voice communication but are fundamentally limited: they transmit only in mono with a maximum sampling rate of 16 kHz, restricting both spatial audio quality and voice fidelity. LE Audio Unicast Voice transforms this scenario by supporting stereo audio with sampling rates up to 32 kHz, significantly improving spatial audio and speech quality for gaming while maintaining voice communication with other players. End‑to‑end latency often must stay under a few tens of milliseconds for responsive play and coherent spatial sound. LC3’s shorter frames and lower bitrates shrink codec delay; tuned CIS parameters preserve deadlines while limiting retransmissions to useful values; beamforming improves capture quality for bidirectional voice without ballooning computational cost [2][7]. Audio Precision’s new Bluetooth® 5 module provides an interface to audio devices using the latest version of the Bluetooth specification, including LE Audio devices utilizing Unicast and Auracast™. Adobe Stock Public Broadcast Audio: Auracast Bluetooth Classic supports only one active audio connection and typically provides a range of approximately 10 meters, making it fundamentally unsuitable for broadcast scenarios such as lecture halls, churches, gyms, and airports. LE Audio introduces the Broadcast Isochronous Stream (BIS), commercially branded as Auracast, enabling true one-to-many audio transmission. 
Multiple hearing aids, headphones, and earbuds can receive the same broadcast, which may be public (e.g., airport announcements) or private (encrypted, non-discoverable, optional password protection). Typical Auracast ranges extend up to 30 meters indoors and 100 meters outdoors, depending on environment and configuration. BIS’s connectionless nature scales easily to unlimited receivers without pairing overhead; isochronous delivery tolerates packet loss well through forward error correction and interleaving; and the unidirectional transmission eliminates return traffic, reducing radio congestion. Assistive listening studies report that bypassing room acoustics and delivering audio directly can improve signal‑to‑noise ratios by 15–20 dB, making announcements comprehensible and lectures clearer [8]. Ensuring It Sounds Good in, on or Over the Listener’s Ear LE Audio delivers the music or voice signal more efficiently than its predecessor, Bluetooth Classic. Audio engineers still need to verify their devices’ audio performance as experienced by the end user. The listener’s pinna, the external part of the ear, and ear canal are a critical part of the playback system. For example, the low-frequency response and the effectiveness of active noise-cancellation are highly dependent on the seal between the device and the listener’s ear canal. Similarly, on-ear and over-ear headphones interact with the listener’s pinnas. Anthropomorphic test fixtures—most notably GRAS KEMAR (Knowles Electronics Manikin for Acoustic Research) head and torso simulators—incorporate soft, deformable anthropomorphic pinnas that replicate realistic insertion and sealing conditions. These allow accurate replication of insertion depth, sealing, low-frequency response, and ANC performance [10][12]. Gaming headsets both receive and send audio. Just like music headphones, gaming headset testing benefits from fixtures with a human-like pinna to ensure repeatable measurement of ear-pad interaction. The headset’s microphone can be either a traditional boom microphone positioned close to the mouth or an array of microphones located farther away on the ear cups incorporating beamforming to isolate the wearer’s voice from any background noise. Test fixtures use an artificial mouth and a microphone positioned at the Mouth Reference Point (MRP) according to ITU-T standards to evaluate microphone performance under realistic speech and background noise conditions [10]. For testing of devices intended as broadcast receivers, an integrated test system with Auracast broadcast capability—like the Audio Precision Bluetooth 5 module—proves invaluable. Conclusion Bluetooth audio is no longer defined by a single radio or a single profile. It is defined by a timed pipeline—a codec that makes better sound with fewer bits, a transport that guarantees when those bits arrive, a radio that can sleep most of the time, and front‑end processing that gives the codec an easier job. Hearing aids illustrate the payoff: arrays and beamformers improve intelligibility first; LC3 compresses with low delay; CIS schedules delivery; the radio sleeps; batteries last. Enhancements in other applications, such as gaming and public broadcast, further strengthen the case for adoption of this cutting-edge technology. While Bluetooth audio began as a low-bandwidth, mono voice technology over Basic Rate (BR) radio in 1999, more than 25 years of evolution has produced a fundamental architectural shift. 
LE Audio replaces continuous point-to-point streams with scheduled, low-power, scalable audio delivery, enabling new classes of devices and use cases. The standards are ready, and audio test systems like Audio Precision’s Bluetooth 5 module are updated to incorporate the new transmission technology; the rest is execution—deploying LE Audio broadly so audio becomes instant, clear, and inclusive [2][7]. References [1] Tosi, J., Taffoni, F., Santacatterina, M., Sannino, R., & Formica, D. (2017). Performance evaluation of Bluetooth Low Energy: A systematic review. Sensors, 17(12), Article 2898. https://doi.org/10.3390/s17122898 [2] Schnell, M., Riedl, M., Löllmann, H., & Multrus, M. (2021). LC3 and LC3plus: The new audio transmission standards for wireless communication. Proceedings of the AES 150th Convention, Online. [3] Hermann, D., Herre, J., & Teichmann, R. (2004). Low-power implementation of the Bluetooth subband audio codec. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Montreal, QC, Canada. [4] Abdelhamid, M. R., Chen, R., Cho, J., Chandrakasan, A. P., & Wentzloff, D. D. (2018). A −80 dBm BLE-compliant, FSK wake-up receiver with system and within-bit duty-cycling for scalable power and latency. Proceedings of the IEEE Custom Integrated Circuits Conference (CICC), San Diego, CA, USA. [5] Mutar, M. S., Mohammed, A. H., & Abdulkareem, M. B. (2024). A survey of sleep scheduling techniques in wireless sensor networks for maximizing energy efficiency. AIP Conference Proceedings. [6] Mikhaylov, K., & Karvonen, H. (2020). Wake-up radio enabled BLE wearables: Empirical and analytical evaluation of energy efficiency. Proceedings of the IEEE International Symposium on Medical Information and Communication Technology (ISMICT). [7] Yan, Z., Xu, H., & Shen, Z. (2024). Modeling and analysis of the performance for CIS-based Bluetooth LE Audio [Preprint]. [8] Kaufmann, T. B., Weller, T., Stiefelhagen, R., & Adiloglu, K. (2023). Requirements for mass adoption of assistive listening technology by the general public. arXiv. https://arxiv.org/abs/2303.02523 [9] Nasir, A. A., Durrani, S., Mehrpouyan, H., Blostein, S. D., & Kennedy, R. A. (2015). Timing and carrier synchronization in wireless communication systems: A survey and classification of research in the last five years. arXiv. https://arxiv.org/abs/1507.02032 [10] Okorn, E., & Wulf-Andersen, P. (2019). Acoustic test fixtures: From KEMAR and beyond! The Journal of the Acoustical Society of America, 146(4), 2815. https://doi.org/10.1121/1.5136656 [11] An analytical model of Bluetooth performance considering physical and link-layer effects. (2021). IEEE Xplore. [12] IEC/ITU acoustic standards literature for headphone and earbud testing. (n.d.). Indexed in The Journal of the Acoustical Society of America and AIP Conference Proceedings. Disclosure: AI tools were used by Wiley, which produced this sponsored article, to skim through research literature for technical insights on the evolution and state of the art of Bluetooth technology. AI was also used to polish the text for conciseness and technical accuracy.
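To make the airtime argument that runs through the article above concrete, here is a minimal sketch of how long a radio must transmit to deliver one 10-millisecond audio frame. The 160 kbps LC3 figure is an assumption derived from the “roughly half the bitrate” comparison with SBC; the other rates are the nominal figures quoted in the text, and protocol overhead, retransmissions, and inter-frame spacing are all ignored.

```python
# Rough airtime per 10-ms audio frame, payload only. The LC3 bitrate below is
# an assumption (about half of SBC's 345 kbps stereo maximum); the BR/EDR and
# LE PHY rates are the nominal figures quoted in the article. No protocol
# overhead, retransmissions, or inter-frame spacing is modeled.

FRAME_MS = 10.0

def frame_airtime_us(codec_kbps: float, phy_mbps: float) -> float:
    """Microseconds of on-air time to send one codec frame's payload."""
    payload_bits = codec_kbps * 1_000 * (FRAME_MS / 1_000)
    return payload_bits / (phy_mbps * 1_000_000) * 1_000_000

cases = [
    ("SBC ~345 kbps over BR/EDR at 2 Mb/s", 345, 2.0),
    ("LC3 ~160 kbps over the LE 1M PHY", 160, 1.0),
    ("LC3 ~160 kbps over the LE 2M PHY", 160, 2.0),
]

for label, kbps, phy_mbps in cases:
    print(f"{label}: ~{frame_airtime_us(kbps, phy_mbps):.0f} µs per frame")
```

Halving the bitrate and doubling the PHY rate each roughly halve the time the radio has to stay on per frame, which is where the hearing-aid energy savings described earlier come from.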
  • How Stupid Would It Be to Put Data Centers in Space?
    Feb 26, 2026 06:00 AM PST
    What’s the difference between a stupid idea and a brilliant one? Sometimes, it just comes down to resources. Practically unlimited funds, like limitless thrust, can get even a mad idea off the ground. And so it might be for the concept of putting AI data centers in orbit. In a rare moment of unalloyed agreement, some of the richest and most powerful men in technology are staunchly backing the idea. The group includes Elon Musk, Jeff Bezos, Jensen Huang, Sam Altman, and Google CEO Sundar Pichai. In all likelihood, hundreds of people are now working on the concept of space data centers at the firms directly or indirectly controlled by these men—SpaceX, Starlink, Tesla, Amazon, Blue Origin, Nvidia, OpenAI, and Google, among others. Likely costs to design, build, and launch a 1-GW orbital data center, based on a network of some 4,300 satellites and including operating costs over a five-year period, would exceed US $50 billion. That’s about three times the cost of a 1-GW data center on Earth, including five years of operation. So how much would it cost to start training large language models in space? Probably the best accounting is one created by aerospace engineer Andrew McCalip. McCalip’s exhaustive, detailed analysis includes interactive sliders that let you compare costs for space-based and terrestrial data centers in the range of 1 to 100 gigawatts. One-gigawatt data centers are being built now on terra firma, and Meta has announced plans for a 5-GW facility, with anticipated completion some time after 2030. In an interview, McCalip says his initial rough calculations a few years ago suggested that data centers in space would cost in the range of 7 to 10 times more, per gigawatt of capacity, than their terrestrial counterparts. “It just wasn’t practical,” he says. “Not even close.” But when Elon Musk began publicly backing the idea, McCalip revisited the numbers using publicly available information about Starlink’s and Tesla’s technologies and capabilities. That changed the picture substantially. The figures in his online analysis assume an orbital network of data-center satellites that borrows heavily from Musk’s tech treasure chest—“essentially…you just start putting some radiation-resistant ASIC chips on the Starlink fleet and you start growing edge capacity organically on the Starlink fleet,” McCalip says. The network would rely on the kind of watt-efficient GPU architecture used in Teslas for self-driving, he adds. “You start dropping those onto the backs of Starlinks. You can slowly grow this out, and this would be approximately the performance that you would get.” Bottom line, with some solid but not necessarily heroic engineering, the cost of an orbital data center could be as low as three times that of the comparable terrestrial one. That differential, while still high, at least nudges the concept out of the instantly dismissible category. “I have my particular views, but I want the data to speak for itself,” McCalip says. For this illustration, we picked a configuration with an aggregate 1 GW of capacity. The network would consist of some 4,300 satellites, each of which would be outfitted with a 1,024-square-meter solar array that generates 250 kilowatts. The data center on that satellite, powered by the array, might have at least 175 GPUs; McCalip notes that a popular GPU rack, Nvidia’s NVL72, has 72 GPUs and requires 120 to 140 kW. 
The total cost of the satellite network would be around US $51 billion, including launch and five years of operational expenses; a comparable terrestrial system would cost about $16 billion over the same period. Stupid? Not stupid? You decide.
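The headline numbers above are easy to sanity-check. Here is a minimal sketch that uses only figures quoted in the article; the per-GPU power draw is a rough proxy interpolated from the NVL72 rack, not a value from McCalip's analysis.

```python
# Sanity check of the orbital data center figures quoted above. All inputs are
# from the article; the per-GPU power range is derived from the NVL72 rack
# (72 GPUs at 120-140 kW) and is only a rough proxy for satellite hardware.

NUM_SATELLITES = 4_300
ARRAY_POWER_KW = 250        # solar array output per satellite
SPACE_COST_B = 51           # US $billions, launch plus five years of operations
GROUND_COST_B = 16          # US $billions for a comparable terrestrial system

# Aggregate capacity of the constellation (should land near 1 GW).
total_gw = NUM_SATELLITES * ARRAY_POWER_KW / 1e6
print(f"Constellation power: {total_gw:.2f} GW")

# Cost premium of going to orbit (the article's roughly 3x figure).
print(f"Space-to-ground cost ratio: {SPACE_COST_B / GROUND_COST_B:.1f}x")

# How many NVL72-class GPUs could a 250-kW array actually feed?
for rack_kw in (120, 140):
    kw_per_gpu = rack_kw / 72
    print(f"GPUs per satellite at {kw_per_gpu:.2f} kW each: "
          f"{ARRAY_POWER_KW / kw_per_gpu:.0f}")
```

The constellation comes out at about 1.08 gigawatts and a 3.2x cost premium, matching the article. The NVL72 numbers would support only roughly 130 to 150 GPUs per satellite, which is presumably why McCalip's configuration leans on more watt-efficient, Tesla-style silicon to reach 175 or more.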
  • Achieving Micron-Level Tolerances: CAD Optimization for Sub-10µm 3D Printing
    Feb 26, 2026 03:00 AM PST
    Achieve successful micro-scale 3D prints by optimizing tolerances, wall thickness, support strategies, microfluidic channels, and material selection in your CAD models from the start. What Attendees Will Learn:
  • Tolerance-driven design: How to define resolution and tolerance constraints that translate directly from CAD intent to sub-10µm printed geometry.
  • Geometry-aware fabrication: Principles for engineering wall thickness, aspect ratios, and orientation to maintain structural fidelity at micron scale.
  • Support-free design strategies: Leveraging self-supporting geometries and build orientation to preserve feature integrity without post-processing trade-offs.
  • Integrated material-process thinking: Matching resin properties, shrinkage behavior, and export parameters to your application’s functional requirements.
Download this free whitepaper now!
  • How to Thrive as a Remote Worker
    Feb 25, 2026 09:48 AM PST
    This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free! Standing Out as a Remote Worker Takes a Different Strategy My first experience as a remote worker was a disaster. Before I joined a San Francisco-based team with a lead developer in Connecticut, I had worked in person, five days a week. I thought success was simple: write good code, solve hard problems, deliver results. So I put my head down and worked harder than ever. Twelve-hour days became normal as the boundary between work and personal life disappeared. My kitchen table became my office. I rarely asked for help because I didn’t want to seem incompetent. I stayed quiet in team Slack channels because I wasn’t sure what to say. Despite working some of the longest hours of my career, I made the slowest progress. I felt disconnected from the team. I had no idea if my work mattered or if anyone noticed what I was doing. I was burning out. Eventually, I realized the real problem: I was invisible. The Office Advantage You Lose When Remote In an office, visibility happens naturally. Colleagues see you arrive early or stay late. They notice when you are stuck on a problem. They hear about your work in hallway conversations and over lunch. Physical presence creates recognition with almost no effort. Remote work removes those signals. Your manager cannot see you at your desk. Your teammates don’t know you’ve hit a roadblock unless you say so. You can work long days and still appear less engaged than someone in the office. That is the shift many people miss: Remote work requires execution plus deliberate communication. What Actually Works By my second remote role, I knew I had to change to protect my sanity and still succeed. Here are five things I did that made a real difference. 1. Over-communicating I began sharing updates in team channels regularly, not just when asked. “Working on the payment integration today; ready for review tomorrow.” “Hit a blocker with API rate limits; investigating options.” These took seconds but made my work visible and invited help sooner. 2. Setting limits When your home is also your office, overwork becomes the default. I started ending most days at 5 p.m. and transitioning out of work mode with a walk or gym session. That ritual helped prevent burnout. 3. Volunteering for presentations Presenting remotely felt less intimidating than standing in front of a room. I started volunteering for demos and lunch-and-learns. This increased my visibility beyond my immediate team and improved my communication skills. 4. Promoting others publicly When someone helped me, I thanked them in a public channel. When a teammate shipped something impressive, I called it out. This builds goodwill and signals collaboration. In remote environments, gratitude is visible and memorable. 5. Building relationships deliberately In an office, relationships form naturally. Remotely, you have to create those moments. I started an engineering book club that met every other week to discuss a technical book. It became a low-pressure way to connect with people across the organization. The Counterintuitive Reality With these habits, I got promoted faster in this remote job than I ever did in an office. I moved from senior engineer to engineering manager in under two years, while maintaining a better work-life balance. 
Remote work offers flexibility and freedom, but it comes with a tax. You are easier to overlook and more likely to burn out unless you are intentional in your actions. So, succeeding remotely takes deliberate effort in communication, relationships, and boundaries. If you do that well, remote work can unlock more opportunities than you might expect. —Brian This Former Physicist Helps Keep the Internet Secure Despite its critical role in maintaining a secure network, authentication software often goes unnoticed by users. Alan DeKok now runs one of the most widely used remote authentication servers in the world—but he didn’t initially set out to work in cybersecurity. DeKok studied nuclear physics before starting the side project that eventually turned into a three-decade-long career. Read more here. More Than 30,000 Tech Employees Laid Off in 2026 We’re just two months into 2026, and layoffs in the tech industry are already ramping up. According to data compiled by RationalFX, more than half of the 30,700 layoffs this year have come from Amazon, which announced that it would be cutting the roles of 16,000 employees in late January. Will the trend continue through 2026? Read more here. IEEE Online Mini-MBA Aims to Fill Leadership Skills Gaps in AI Recent research suggests that a majority of organizations have a significant gap when it comes to AI skills among leadership. To help fill the gap, IEEE has partnered with the Rutgers Business School to offer an online “mini-MBA” program, combining business strategy and deep AI literacy. The program spans 12 weeks and 10 modules that teach students how to implement AI strategies in their own organizations. Read more here.
  • AI Is Acing Math Exams Faster Than Scientists Write Them
    Feb 25, 2026 08:00 AM PST
    Mathematics is often regarded as the ideal domain for measuring AI progress. Math’s step-by-step logic is easy to track, and its definitive, automatically verifiable answers remove any human or subjective factors. But AI systems are improving at such a pace that math benchmarks are struggling to keep up. Way back in November 2024, nonprofit research organization Epoch AI quietly released FrontierMath. A standardized, rigorous benchmark, FrontierMath was designed to measure the mathematical reasoning capabilities of the latest AI tools. “It’s a bunch of really hard math problems,” explains Greg Burnham, Epoch AI senior researcher. “Originally, it was 300 problems that we now call tiers 1–3, but having seen AI capabilities really speed up, there was a feeling that we had to run to stay ahead, so now there’s a special challenge set of extra carefully constructed problems that we call tier 4.” To a rough approximation, tiers 1–4 go from advanced undergraduate through to early postdoc-level mathematics. When introduced, state-of-the-art AI models were unable to solve more than 2 percent of the problems FrontierMath contained. Fast forward to today: The best publicly available AI models, such as GPT-5.2 and Claude Opus 4.6, are solving over 40 percent of FrontierMath’s 300 tier 1–3 problems, and over 30 percent of the 50 tier 4 problems. AI takes on Ph.D.-level mathematics And this dizzying pace of advancement is showing no signs of abating. For example, just recently Google DeepMind announced that Aletheia, an experimental AI system derived from Gemini Deep Think, achieved publishable Ph.D.-level research results. Though the mathematics is obscure—the system computed certain structure constants in arithmetic geometry called eigenweights—the result is significant in terms of AI development. “They’re claiming it was essentially autonomous, meaning a human wasn’t guiding the work, and it’s publishable,” Burnham says. “It’s definitely at the lower end of the spectrum of work that would get a mathematician excited, but it’s new—it’s something we truly haven’t really seen before.” To place this achievement in context, every FrontierMath problem has a known answer that a human has derived. Though a human could probably have achieved Aletheia’s result “if they sat down and steeled themselves for a week,” says Burnham, no human had ever done so. Aletheia’s results and other recent achievements by AI mathematicians point to new, tougher benchmarks being needed to understand AI capabilities—and fast, because existing ones will soon become irrelevant. “There are easier math benchmarks that are already obsolete, several generations of them,” says Burnham. “FrontierMath will probably saturate [Ed. note: This means that state-of-the-art AI models score 100 percent] within the next two years—could be faster.” The First Proof challenge To begin to address this problem, on 6 February, a group of 11 highly distinguished mathematicians proposed the First Proof challenge, a set of 10 extremely difficult math questions that arose naturally in the authors’ research processes, and whose proofs are roughly five pages or less and had not been shared with anyone. The First Proof challenge was a preliminary effort to assess the capabilities of AI systems in solving research-level math questions on their own. The challenge generated serious buzz in the math community, and professional and amateur mathematicians, as well as teams including OpenAI, stepped up to it. 
But by the time the authors posted the proofs on 14 February, no one had submitted correct solutions to all 10 problems. In fact, far from it. The authors themselves only solved two of the 10 problems using Gemini 3.0 Deep Think and ChatGPT 5.2 Pro. And most outside submissions fared little better, apart from OpenAI and a small Aletheia team at Google DeepMind. With “limited human supervision,” OpenAI’s most advanced internal AI system solved five of the 10 problems, with Aletheia achieving similar outcomes—results met with a spectrum of emotions by different members of the mathematics community, from awe to disappointment. The team behind First Proof plans an even tougher second round on 14 March. A new frontier for AI “I think First Proof is terrific: It’s as close as you could realistically get to putting an AI system in the shoes of a mathematician,” says Burnham. Though he admires how First Proof tests AI’s mathematical utility for a wide range of mathematics and mathematicians, Epoch AI has its own new approach to testing—FrontierMath: Open Problems. Uniquely, the pilot benchmark consists of 16 open problems (with more to follow) from research mathematics that professional mathematicians have tried and failed to solve. Since Open Problems’ release on 27 January, none have been solved by an AI. “With Open Problems, we’ve tried to make it more challenging,” says Burnham. “The baseline on its own would be publishable, at least in a specialty journal.” What’s more, each question is designed so that it can be automatically graded. “This is a bit counterintuitive,” Burnham adds. “No one knows the answers, but we have a computer program that will be able to judge whether the answer is right or not.” Burnham sees First Proof and Open Problems as being complementary. “I would say understanding AI capabilities is a more-the-merrier situation,” he adds. “AI has gotten to the point where it’s—in some ways—better than most Ph.D. students, so we need to pose problems where the answer would be at least moderately interesting to some human mathematicians, not because AI was doing it but because it’s mathematics that human mathematicians care about.”
  • Jimi Hendrix Was a Systems Engineer
    Feb 25, 2026 07:39 AM PST
    3 February 1967 is a day that belongs in the annals of music history. It’s the day that Jimi Hendrix entered London’s Olympic Studios to record a song using a new component. The song was “Purple Haze,” and the component was the Octavia guitar pedal, created for Hendrix by sound engineer Roger Mayer. The pedal was a key element of a complex chain of analog components responsible for the final sound, including the acoustics of the studio room itself. When the tapes were sent to the United States for remastering, the sounds on them were so novel that they were accompanied by a note explaining that the distortion at the end was intentional, not a malfunction. A few months later, Hendrix would deliver his legendary electric guitar performance at the Monterey International Pop Festival. “Purple Haze” firmly established that an electric guitar can be used not just as a stringed instrument with built-in pickups for convenient sound amplification, but also as a full-blown wave synthesizer whose output can be manipulated at will. Modern guitarists can reproduce Hendrix’s chain using separate plug-ins in digital audio workstation software, but the magic often disappears when everything is buffered and quantized. I wanted to find out if a more systematic approach could do a better job and provide insights into how Hendrix created his groundbreaking sound. My fascination with Hendrix’s Olympic Studios performance arose because there is a “Hendrix was an alien” narrative surrounding his musical innovation—that his music appeared more or less out of nowhere. I wanted to replace that narrative with an engineering-driven account that’s inspectable and reproducible—plots, models, and a signal chain from the guitar through the pedals that you can probe stage by stage. Each effects pedal in Hendrix’s chain contributed to enhancing the electric guitar beyond its intrinsic limits. A selection of plots from the full-circuit analysis shows how the Fuzz Face turns a sinusoid signal from a string into an almost square wave; how the Octavia pedal inverts half the input waveform to double its frequency; how the wah-wah pedal acts as a band-pass filter; and how the Uni-Vibe pedal introduces selective phase shifts to color the sound. Although I work mostly in the digital domain as an edge-computing architect in my day job, I knew that analog circuit simulations would be the key to going deeper. My first step was to look at the challenges Hendrix was trying to address. Before the 1930s, guitars were too quiet for large ensembles. Electromagnetic pickups—coils of wire wrapped around magnets that detect the vibrations of metal strings—fixed the loudness problem. But they left a new one: the envelope, which specifies how the amplitude of a note varies as it’s played on an instrument, starting with a rising initial attack, followed by a falling decay, and then any sustain of the note after that. Electric guitars attack hard, decay fast, and don’t sustain like bowed strings or organs. Early manufacturers tried to modify the electric guitar’s characteristics by using hollow bodies fitted with magnetic pickups, but the instrument still barked more than it sang. Hendrix’s mission was to reshape both the electric guitar’s envelope and its tone until it could feel like a human voice. He tackled the guitar’s constraints by augmenting it. His solution was essentially a modular analog signal chain driven not by knobs but by hands, feet, gain staging, and physical movement in a feedback field. 
Hendrix’s setups are well documented: Set lists, studio logs, and interviews with Mayer and Eddie Kramer, then the lead engineer at Olympic Studios, fill in the details. The signal chain for “Purple Haze” consisted of a set of pedals—a Fuzz Face, the Octavia, and a wah-wah—plus a Marshall 100-watt amplifier stack, with the guitar and room acoustics closing a feedback loop that Hendrix tuned with his own body. Later, Hendrix would also incorporate a Uni-Vibe pedal for many of his tracks. All the pedals were commercial models except for the Octavia, which Mayer built to produce a distorted signal an octave higher than its input. Hendrix didn’t speak in decibels and ohm values, but he collaborated with engineers who did. I obtained the schematics for each of these elements and their accepted parameter ranges, and converted them into netlists that ngspice can process (ngspice is an open-source implementation of the SPICE circuit simulator). The Fuzz Face pedal came in two variants, using germanium or silicon transistors, so I created models for both. In my models, Hendrix’s guitar pickups had a resistance of 6 kiloohms and an inductance of 2.5 henrys with a realistic cable capacitance. I chained the circuit simulations together using a script, and I produced data-plot and sample sound outputs with Python scripts. All of the ngspice files and other scripts are available in my GitHub repository at github.com/nahorov/Hendrix-Systems-Lab, with instructions on how to reproduce my simulations. What Does the Analysis of Hendrix’s Signal Chain Tell Us? Plotting the signal at different points in the chain with different parameters reveals how Hendrix configured and manipulated the nonlinear complexities of the system as a whole to reach his expressive goals. A few highlights: First, the Fuzz Face is a two-transistor feedback amplifier that turns a gentle sinusoid signal into an almost binary “fuzzy” output. The interesting behavior emerges when the guitar’s volume is reduced. Because the pedal’s input impedance is very low (about 20 kΩ), the pickups interact directly with the pedal circuit. Reducing amplitude restores a sinusoidal shape—producing the famous “cleanup effect” that was a hallmark of Hendrix’s sound, where the fuzz drops in and out as desired while he played. Engineer Eddie Kramer, Jimi Hendrix, and studio manager Jim Marron at the Electric Lady Studios in New York City. Second, the Octavia pedal used a rectifier, which normally converts alternating to direct current. Mayer realized that a rectifier effectively flips each trough of a waveform into a peak, doubling the number of peaks per second. The result is an apparent doubling of frequency—a bloom of second-harmonic content that the ear hears a bright octave above the fundamental. (A short numerical sketch of this effect appears at the end of this article.) Third, the wah-wah pedal is a band-pass filter: Frequency plots show the center frequency sweeping from roughly 300 hertz to 2 kilohertz. Hendrix used it to make the guitar “talk” with vowel sounds, most iconically on “Voodoo Child (Slight Return).” Fourth, the Uni-Vibe cascades four phase-shift sections controlled by photoresistors. In circuit terms, it’s a low-frequency oscillator modulating a variable-phase network; in musical terms it’s motion and air. Finally, the whole chain became a closed loop by driving the Marshall amplifier near saturation, which among other things extends the sustain. 
In a reflective room, the guitar strings couple acoustically to the speakers—move a few centimeters and you shift from one stable feedback mode to another. To an engineer, this is a gain-controlled acoustic feedback system. To Hendrix, it was part of the instrument. He learned to tune oscillation with distance and angle, shaping sirens, bombs, and harmonics by walking the edge of instability. Hendrix didn’t speak in decibels and ohm values, but he collaborated with engineers who did—Mayer and Kramer—and iterated fast as a systems engineer. Reframing Hendrix as an engineer doesn’t diminish the art. It explains how one person, in under four years as a bandleader, could pull the electric guitar toward its full potential by systematically augmenting the instrument’s shortcomings for maximum expression. This article appears in the March 2026 print issue as “Jimi Hendrix, Systems Engineer.” A correction to this article was made on 27 Feb 2026 to correctly identify the men posing with Jimi Hendrix in the recording studio.
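As a companion to the rectifier explanation above, here is a minimal, self-contained sketch showing that full-wave rectifying a tone pushes its strongest spectral component up an octave. It is not code from the Hendrix-Systems-Lab repository; it uses NumPy only, and the 110-hertz test tone is an arbitrary choice.

```python
import numpy as np

# Show that full-wave rectification doubles the apparent frequency of a tone,
# the core of the Octavia effect described above. Illustrative sketch only;
# this is not taken from the Hendrix-Systems-Lab repository.

fs = 48_000                       # sample rate, Hz
t = np.arange(fs) / fs            # one second of samples
f0 = 110.0                        # test tone: the open A string, Hz

tone = np.sin(2 * np.pi * f0 * t)
rectified = np.abs(tone)          # ideal full-wave rectifier: troughs become peaks

def dominant_frequency(signal, sample_rate):
    """Return the non-DC frequency bin with the largest magnitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0             # ignore the DC offset the rectifier introduces
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    return freqs[np.argmax(spectrum)]

print(f"input tone peak:     {dominant_frequency(tone, fs):.0f} Hz")       # ~110 Hz
print(f"rectified tone peak: {dominant_frequency(rectified, fs):.0f} Hz")  # ~220 Hz
```

The script prints 110 Hz for the raw tone and 220 Hz for the rectified one, matching Mayer’s observation that flipping each trough of the waveform into a peak doubles the number of peaks per second.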
  • This Physics Professor Credits Collaboration for Her Success
    Feb 24, 2026 11:00 AM PST
    For Cinzia DaVià, collaboration isn’t just a buzzword. It’s the approach she applies to all her professional endeavors. From her contributions to the development of a silicon sensor used in CERN (European Organization for Nuclear Research) particle accelerator experiments to her current research on portable energy generation solutions, there’s a common thread.
Cinzia DaVià
  • Employers: University of Manchester, England; Stony Brook University, in New York
  • Job titles: Professor of physics; research professor
  • Member grade: Senior member
  • Alma maters: University of Bologna, Italy; University of Glasgow
As a professor of physics at the University of Manchester, in England, and a research professor at Stony Brook University, in New York, she has built strong connections across academic disciplines. Her continued involvement at CERN connects her with a broad array of professionals. DaVià, an IEEE senior member, says she leverages her expertise and her network of collaborators to solve problems and build solutions. Her efforts include advancing high-energy particle experiments, improving cancer treatments and quantum imaging, and mitigating the effects of climate change. Collaboration is the foundation for any project’s success, she says. She credits IEEE for making many of her professional connections possible. Even though she is the driving force behind building her alliances, she prefers to shine the spotlight on others, she says. For her, focusing on teamwork is more important than identifying individual contributions. “The people involved in any project are really the ones to be celebrated,” she says. “The focus should be on them, not me.” A career influenced by Italian television DaVià’s passion for physics was sparked during her childhood in the Italian Dolomites by a popular documentary series, “Astronomia,” an Italian version of Carl Sagan’s renowned “Cosmos” series. The show was DaVià’s introduction to the world of astrophysics. She enrolled at Italy’s Alma Mater Studiorum/University of Bologna, confident she would pursue a degree in astronomy and astrophysics. A summer internship at CERN in Geneva changed her career trajectory. She helped construct experiments for the Large Electron-Positron collider there. The LEP remains the largest electron-positron accelerator ever. An underground tunnel with a 27-kilometer circumference was built on the CERN campus to accommodate the LEP. It was Europe’s biggest civil engineering project at the time. The LEP was designed to validate the standard model of physics, which until then was a theoretical framework that attempted to explain the universe’s building blocks. The experiments—which performed precision measurements of the W and Z bosons, the charged and neutral force carriers central to particle physics—confirmed the standard model. The LEP also paved the way, figuratively and literally, for CERN’s Large Hadron Collider. Following the LEP’s decommissioning in 2000, it was dismantled to make way for the LHC in the same underground testing tunnel. As DaVià’s summer internship work on LEP experiments progressed, her professional focus shifted. Her plans to work in astrophysics gradually transitioned to a focus on radiation instrumentation. After graduating in 1989 with a physics degree, she returned to CERN for a one-year assignment. As she got more involved in research and development for the large collider experiments, her one year turned into 10. She received a CERN fellowship to help her finish her Ph.D. 
in physics at the University of Glasgow—which she received in 1997. Her work focused on radiation detectors and their applications in medicine. “Nothing was programmed,” she says of her career trajectory. “It was always an opportunity that came after another opportunity, and things evolved along the way.” A fusion of research and results During her decade at CERN from 1989 to 1999, she contributed to several groundbreaking discoveries. One involved the radiation hardness of silicon sensors at cryogenic temperatures, referred to in physics as the Lazarus effect. In the world of collider experiments, the silicon sensors function as eyes that capture the first moments of particle creation. The sensors are part of a larger detector unit that takes millions of images per second, helping scientists better understand particle creation. In large collider experiments, the silicon sensors suffer significant damage from the radiation generated. After repeated exposure, the sensors eventually become nonfunctional. DaVià’s contributions helped develop the process of reviving the dead detectors by cooling them down to temperatures below -143° C. Her proudest professional accomplishment, she says, was a different discovery at CERN: Her research helped usher in a new era of large collider experiments. For many years, researchers there used planar silicon sensors in collider experiments. But as the large colliders grew more sophisticated and capable, the traditional planar silicon design couldn’t withstand the extreme radiation present at the epicenter of collider collisions. DaVià’s research contributed to the development, together with inventor Sherwood Parker, of 3D silicon sensors that could withstand extreme radiation. The new sensors are radiation-resistant and exceptionally fast, she says. Scientists began replacing planar sensors in the detectors deployed closest to the center of each collision. Planar detectors are still widely used in collider experiments but farther from direct impacts. The development of the 3D silicon sensor was groundbreaking, but DaVià says she is proud of it for a different reason. The collaborative approach of the cross-functional R&D team she built is the most noteworthy outcome, she says. Initially, people with conservative scientific views resisted the idea of creating a new sensor technology, she says. She was able to bring together a broad coalition of scientists, researchers, and industry leaders to work together, despite the initial skepticism and competing interests. The team included two companies that were direct competitors. That type of industry collaboration was unheard of at the time, she says. “I was able to convince them,” she says, “that working together would be the best and fastest way forward.” Her approach succeeded. The two companies not only worked side by side but also exchanged proprietary information. They went so far as to agree that if something halted progress for one of them, it would ship everything to the other so production could continue. DaVià coauthored a book about the project, Radiation Sensors With 3D Electrodes. A focus on sustainable entrepreneurship DaVià has long been concerned about the impact of extreme weather events, especially on underserved populations. Her interest transformed into action after she attended the Effect of Extreme Weather on Women and Under-Represented Groups session held at the Osaka World Expo last August in Kansai, Japan. It was sponsored in part by the IEEE Nuclear and Plasma Sciences Society (NPSS). 
During the symposium, panelists shared insights about natural disasters in their regions and identified steps that could help mitigate damage and protect lives. The topics that particularly interested DaVià, she says, were excessive glacial melt in the Himalayas and the lack of tsunami warnings on remote Indonesian islands. One of the ideas that surfaced during a brainstorming session was that of “smart shelters” that could be deployed in remote areas to assist in recovery efforts. The shelters would provide power and a means of communication during outages. The concept was inspired by MOVE, an IEEE-USA initiative. The MOVE program provides communities affected by natural disasters with power and communications capabilities. The services are contained within MOVE vehicles and are powered by generators. A single MOVE vehicle can charge up to 100 phones, bolstering communication capabilities for relief agencies and disaster survivors. DaVià’s knowledge of MOVE guided the evolution of the smart shelter concept. She recognized, however, that the challenge of powering portable shelters needed to be solved. She took the lead and formed a cross-disciplinary team of IEEE members and other professionals to make headway. One result is a planned two-day conference on sustainable entrepreneurship to be held at CERN in October. “IEEE helps bring people together who might not otherwise connect.” The goal of the conference, she says, is to “join the dots across different disciplines by involving as many IEEE societies and external experts as possible to work toward deployable solutions that help improve life for people around the world.” The two-day event will include a competition focusing on solutions for sustainable energy generation and storage systems, she says, adding that entrepreneurs will share their ideas on the second day. Her commitment to developing solutions to mitigate destruction caused by extreme weather led to her involvement with the IEEE Online Forum on Climate Change Technologies. She led the way in creating the Climate Change Initiative within NPSS. She was the driving force behind securing funding for two of the society’s climate-related events. One was the 2024 Climate Workshop on Nuclear and Plasma Solutions for Energy and Society. The second event, building on the success of the first, was last year’s workshop, Nuclear and Plasma Opportunities for Energy and Society, held in November in Yokohama, Japan, in conjunction with the IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room-Temperature Semiconductor Detector Conference. New paths to guide others DaVià reduced her involvement at CERN when she joined the faculty at the University of Manchester as a physics professor. In 2016 she joined Stony Brook University as a research professor in the physics and astronomy department. She divides her time between the two schools, where she works on a project on quantum imaging with X-rays and gamma rays alongside colleagues at the Brookhaven National Laboratory. She still maintains an office at CERN, where she works with students involved with particle physics. She is also an advisory board member of its IdeaSquare, an innovation space where science, technology, and entrepreneurial minds gather to brainstorm and test ideas. The goal is to identify ways to apply innovations generated by high-energy physics experiments to solve global challenges. 
DaVià is the radiation detectors and imaging editor of Frontiers in Physics and a cochair of the European Union’s ATTRACT initiative, which promotes radiation imaging research across the continent. She is an active member of the European Physical Society, and she is an IEEE liaison officer for the physics and industry working group of the International Union of Pure and Applied Physics. She has coauthored more than 900 publications. IEEE as the connector DaVià’s involvement with IEEE dates back to her undergraduate years, when she was introduced to the organization at a conference sponsored by the IEEE NPSS. As her career grew, so did her involvement with IEEE. She remains active with the society as a distinguished lecturer. DaVià is the leader of the society’s climate change initiative and a co-organizer of the IEEE Second Presentation Series on Nuclear Energy webinar. She is a member of the IEEE Society of Social Implications of Technology, the IEEE Power & Energy Society, and the IEEE Women in Engineering group. She received the 2022 WIE Outstanding Volunteer of the Year Award. She stays involved in IEEE as the chair of the IEEE Society on Social Implications of Technology’s Sustainability Technical Committee Innovation Engagement and Collaboration Inter-society Committee. This involvement helps her understand the work being done within each society and identify opportunities for cross-collaboration, she says. She sees such synergies as a key benefit of membership. “IEEE helps bring people together who might not otherwise connect,” she says. “We are stronger together with IEEE.” This article was updated on 26 February 2026.
  • Your Watch Will One Day Track Blood Pressure
    Feb 24, 2026 07:00 AM PST
    Your smartwatch can track a lot of things, but at least for now, it can’t keep an accurate eye on your blood pressure. Last week researchers from the University of Texas at Austin showed a way your smartwatch someday could. They were able to discern blood pressure by reflecting radio signals off a person’s wrist, and they plan to integrate the electronics that did it into a smartwatch in a couple of years. Besides the tried-and-true blood-pressure cuff, researchers have found several new ways to monitor blood pressure using pasted-on ultrasound transducers, electrocardiogram sensors, bioimpedance measurements, photoplethysmography, and combinations of these measurements. “We found that existing methods all face limitations,” Yiming Han, a doctoral candidate in the lab of Yaoyao Jia, told engineers at the IEEE International Solid State Circuits Conference (ISSCC) last week in San Francisco. For example, ultrasound sensing requires long-term contact with the skin. And as cool as electronic tattoos seem, they’re not as convenient or comfortable as a smartwatch. Photoplethysmography, which detects the oxygenation state of blood using light, doesn’t need direct contact, and indeed researchers in Tehran and California recently used it and a heavy dose of machine learning to monitor blood pressure. However, these sensors are thought to be sensitive to a person’s skin tone and were blamed for inadequate treatment of Black patients in the United States during the COVID-19 pandemic. The University of Texas team sought a noncontact solution that was immune to skin-tone bias and could be integrated into a small device. Continuous Blood Pressure Monitoring Blood pressure measurements consist of two readings—systole, the peak pressure when the heart contracts and forces blood into arteries, and diastole, the phase in between heart contractions when pressure drops. During systole, blood vessels expand and stiffen and blood velocity increases. The opposite occurs in diastole. All these changes alter the tissue’s conductivity, dielectric properties, and other characteristics, so they should show up in reflected near-field radio waves, Jia’s colleague Deji Akinwande reasoned. Near-field waves are radiation impacting a surface that is less than one wavelength from the radiation’s source. The researchers were able to test this idea using a common laboratory instrument called a vector network analyzer. Among its abilities, the analyzer can sense RF reflection, and the team was able to quickly correlate the radio response to blood pressure measured using standard medical equipment. What Akinwande and Jia’s team saw was this: During systole, reflected near-field waves were more strongly out of phase with the transmitted radiation, while in diastole the reflections were weaker and closer to being in phase with the transmission. You obviously can’t lug around a US $50,000 analyzer just to keep track of your blood pressure, so the team created a wearable system to do the job. It consists of a patch antenna strapped to a person’s wrist. The antenna connects to a device called a circulator—a kind of traffic roundabout for radio signals that steers outgoing signals to the antenna and signals coming in from the antenna to a separate circuit. A custom-designed integrated circuit feeds a 2.4-gigahertz microwave signal into one of the circulator’s on-ramps and receives, amplifies, and digitizes the much weaker reflection coming in from another branch. The whole system consumes just 3.4 milliwatts. 
“Our work is the only one to provide no skin contact and no skin-tone bias,” Han said. The next version of the device will use multiple radio frequencies to increase accuracy, says Jia, “because different people’s tissue conditions are different,” and some might respond better to one or another. Like the 2.4 GHz used in the prototype, these other frequencies will be of the sort already in common use, such as 5 GHz (a Wi-Fi frequency) and 915 megahertz (a cellular frequency). Following those experiments, Jia’s team will turn to building the device into a smartwatch form factor and testing it more broadly for possible commercialization.
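The article defines the near field as within one wavelength of the source, so it is easy to check that a wrist-worn antenna sits comfortably inside it at the frequencies mentioned. Here is a minimal sketch; the few-millimeter antenna-to-skin distance is an illustrative assumption, not a figure from the UT Austin work.

```python
# Near-field check for the radio frequencies mentioned above. "Near field"
# follows the article's definition: within one wavelength of the source.
# The 5-mm antenna-to-wrist gap is an illustrative assumption, not a measured
# figure from the UT Austin prototype.

C = 299_792_458  # speed of light, m/s

frequencies_hz = {
    "915 MHz (cellular band)": 915e6,
    "2.4 GHz (prototype)": 2.4e9,
    "5 GHz (Wi-Fi band)": 5e9,
}

standoff_m = 0.005  # assumed antenna-to-wrist distance: 5 mm

for label, f in frequencies_hz.items():
    wavelength = C / f
    region = "inside" if standoff_m < wavelength else "outside"
    print(f"{label}: wavelength = {wavelength * 100:.1f} cm, "
          f"skin at {standoff_m * 100:.1f} cm is {region} the near field")
```

At 2.4 gigahertz the wavelength is about 12.5 centimeters, so tissue a few millimeters from the patch antenna sits deep in the near field, the regime the phase-shift measurements described above rely on.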
  • AI’s Math Tricks Don’t Work for Scientific Computing
    Feb 23, 2026 05:00 AM PST
    AI has driven an explosion of new number formats—the ways in which numbers are represented digitally. Engineers are looking at every possible way to save computation time and energy, including shortening the number of bits used to represent data. But what works for AI doesn’t necessarily work for scientific computing, be it for computational physics, biology, fluid dynamics, or engineering simulations. IEEE Spectrum spoke with Laslo Hunhold, who recently joined Barcelona-based Openchip as an AI engineer, about his efforts to develop a bespoke number format for scientific computing. Laslo Hunhold is a senior AI accelerator engineer at Barcelona-based startup Openchip. He recently completed a Ph.D. in computer science at the University of Cologne, in Germany. What makes number formats interesting to you? Laslo Hunhold: I don’t know another example of a field that so few are interested in but has such a high impact. If you make a number format that’s 10 percent more [energy] efficient, it can translate to all applications being 10 percent more efficient, and you can save a lot of energy. Why are there so many new number formats? Hunhold: For decades, computer users had it really easy. They could just buy new systems every few years, and they would have performance benefits for free. But this hasn’t been the case for the last 10 years. In computers, you have a certain number of bits used to represent a single number, and for years the default was 64 bits. And for AI, companies noticed that they don’t need 64 bits for each number. So they had a strong incentive to go down to 16, 8, or even 2 bits [to save energy]. The problem is, the dominant standard for representing numbers in 64 bits is not well designed for lower bit counts. So in the AI field, they came up with new formats that are more tailored toward AI. Why does AI need different number formats than scientific computing? Hunhold: Scientific computing needs high dynamic range: You need very large numbers, or very small numbers, and very high accuracy in both cases. The 64-bit standard has an excessive dynamic range, and it is many more bits than you need most of the time. It’s different with AI. The numbers usually follow a specific distribution, and you don’t need as much accuracy. What makes a number format “good”? Hunhold: You have infinite numbers but only finite bit representations. So you need to decide how you assign numbers. The most important part is to represent numbers that you’re actually going to use. Because if you represent a number that you don’t use, you’ve wasted a representation. The simplest thing to look at is the dynamic range. The next is distribution: How do you assign your bits to certain values? Do you have a uniform distribution, or something else? There are infinite possibilities. What motivated you to introduce the takum number format? Hunhold: Takums are based on posits. In posits, the numbers that get used more frequently can be represented with more density. But posits don’t work for scientific computing, and this is a huge issue. They have a high density for [numbers close to one], which is great for AI, but the density falls off sharply once you look at larger or smaller values. People have been proposing dozens of number formats in the last few years, but takums are the only number format that’s actually tailored for scientific computing.
I found the dynamic range of values you use in scientific computations, if you look at all the fields, and designed takums such that when you take away bits, you don’t reduce that dynamic range. This article appears in the March 2026 print issue as “Laslo Hunhold.”
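Hunhold’s point about bit width and dynamic range is easy to see with the standard IEEE 754 formats. The snippet below is only an illustration of that trade-off, not an implementation of takums or posits: it prints how quickly the range of representable normal values, and the decimal precision, collapse as floats get narrower.

    import numpy as np

    for dtype in (np.float64, np.float32, np.float16):
        info = np.finfo(dtype)
        decades = np.log10(float(info.max)) - np.log10(float(info.tiny))
        print(f"{info.bits:2d}-bit float: ~{decades:5.0f} decades of normal dynamic range, "
              f"~{info.precision} significant decimal digits")

    # Roughly: 64 bits gives ~616 decades and 15 digits, 32 bits ~76 decades and 6 digits,
    # and 16 bits only ~9 decades and 3 digits, which is why formats that keep useful range
    # at low bit counts matter for scientific codes.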
  • AI for Cybersecurity: Promise, Practice, and Pitfalls
    Feb 23, 2026 03:00 AM PST
    AI is revolutionizing the cybersecurity landscape. From accelerating threat detection to enabling real-time automated responses, artificial intelligence is reshaping how organizations defend against increasingly sophisticated attacks. But with these advancements come new and complex risks—AI systems themselves can be exploited, manipulated, or biased, creating fresh vulnerabilities. In this session, we’ll explore how AI is being applied in real-world cybersecurity scenarios—from anomaly detection and behavioral analytics to predictive threat modeling. We’ll also confront the challenges that come with it, including adversarial AI, data bias, and the ethical dilemmas of autonomous decision-making. Looking ahead, we’ll examine the future of intelligent cyber defense and what it takes to stay ahead of evolving threats. Join us to learn how to harness AI responsibly and effectively—balancing innovation with security, and automation with accountability. Register now for this free webinar!
  • The Age-Verification Trap
    Feb 23, 2026 01:00 AM PST
    Social media is going the way of alcohol, gambling, and other social sins: Societies are deciding it’s no longer kid stuff. Lawmakers point to compulsive use, exposure to harmful content, and mounting concerns about adolescent mental health. So, many propose to set a minimum age, usually 13 or 16. In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law. This is the age-verification trap. Strong enforcement of age rules undermines data privacy. How Does Age Enforcement Actually Work? Most age-restriction laws follow a familiar pattern. They set a minimum age and require platforms to take “reasonable steps” or “effective measures” to prevent underage access. What these laws rarely spell out is how platforms are supposed to tell who is actually over the line. At the technical level, companies have only two tools. The first is identity-based verification. Companies ask users to upload a government ID, link a digital identity, or provide documents that prove their age. Yet in many jurisdictions, 16-year-olds do not have IDs. In others, IDs exist but are not digital, not widely held, or not trustworthy. Storing copies of identity documents also creates security and misuse risks. The second option is inference. Platforms try to guess age based on behavior, device signals, or biometric analysis, most commonly facial age estimation from selfies or videos. This avoids formal ID collection, but it replaces certainty with probability and error. In practice, companies combine both. Self-declared ages are backed by inference systems. When confidence drops, or regulators ask for proof of effort, inference escalates to ID checks. What starts as a light-touch checkpoint turns into layered verification that follows users over time. What Are Platforms Doing Now? This pattern is already visible on major platforms. Meta has deployed facial age estimation on Instagram in multiple markets, using video-selfie checks through third-party partners. When the system flags users as possibly underaged, it prompts them to record a short selfie video. An AI system estimates their age and, if it decides they are under the threshold, restricts or locks the account. Appeals often trigger additional checks, and misclassifications are common. TikTok has confirmed that it also scans public videos to infer users’ ages. Google and YouTube rely heavily on behavioral signals tied to viewing history and account activity to infer age, then ask for government ID or a credit card when the system is unsure. A credit card functions as a proxy for adulthood, even though it says nothing about who is actually using the account. The Roblox games site, which recently launched a new age-estimate system, is already suffering from users selling child-aged accounts to adult predators seeking entry to age-restricted areas, Wired reports. For a typical user, age is no longer a one-time declaration. It becomes a recurring test. A new phone, a change in behavior, or a false signal can trigger another check. Passing once does not end the process. How Do Age-Verification Systems Fail? These systems fail in predictable ways. False positives are common. 
Platforms flag adults as minors when they have youthful faces, share family devices, or show otherwise unusual usage. They lock accounts, sometimes for days. False negatives also persist. Teenagers learn quickly how to evade checks by borrowing IDs, cycling accounts, or using VPNs. The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target. Scale that experience across millions of users, and you bake the privacy risk into how platforms work. Is Age Verification Compatible With Privacy Law? This is where emerging age-restriction policy collides with existing privacy law. Modern data-protection regimes all rest on similar ideas: Collect only what you need, use it only for a defined purpose, and keep it only as long as necessary. Age enforcement undermines all three. To prove they are following age-verification rules, platforms must log verification attempts, retain evidence, and monitor users over time. When regulators or courts ask whether a platform took reasonable steps, “We collected less data” is rarely persuasive. For companies, defending themselves against accusations of neglecting to properly verify age supersedes defending themselves against accusations of inappropriate data collection. It is not an explicit choice by voters or policymakers, but instead a reaction to enforcement pressure and how companies perceive their litigation risk. Less Developed Countries, Deeper Surveillance Outside wealthy democracies, the trade-off is even starker. Brazil’s Statute of the Child and Adolescent (ECA, after its Portuguese name) imposes strong child-protection duties online, while its data-protection law restricts data collection and processing. Now providers operating in Brazil must adopt effective age-verification mechanisms and can no longer rely on self-declaration alone for high-risk services. Yet they also face uneven identity infrastructure and widespread device sharing. To compensate, they rely more heavily on facial estimation and third-party verification vendors. In Nigeria many users lack formal IDs. Digital service providers fill the gap with behavioral analysis, biometric inference, and offshore verification services, often with limited oversight. Audit logs grow, data flows expand, and the practical ability of users to understand or contest how companies infer their age shrinks accordingly. Where identity systems are weak, companies do not protect privacy. They bypass it. The paradox is clear. In countries with less administrative capacity, age enforcement often produces more surveillance, not less, because inference fills the void of missing documents. How Do Enforcement Priorities Change Expectations? Some policymakers assume that vague standards preserve flexibility. In the U.K., then–Digital Secretary Michelle Donelan argued in 2023 that requiring certain online safety outcomes without specifying the means would avoid mandating particular technologies. Experience suggests the opposite. When disputes reach regulators or courts, the question is simple: Can minors still access the platform easily? If the answer is yes, authorities tell companies to do more. Over time, “reasonable steps” become more invasive.
Repeated facial scans, escalating ID checks, and long-term logging become the norm. Platforms that collect less data start to look reckless by comparison. Privacy-preserving designs lose out to defensible ones. This pattern is familiar from other domains, including online sales-tax enforcement. After courts settled that large platforms had an obligation to collect and remit sales taxes, companies began continuous tracking and storage of transaction destinations and customer location signals. That tracking is not abusive, but once enforcement requires proof over time, companies build systems to log, retain, and correlate more data. Age verification is moving the same way. What begins as a one-time check becomes an ongoing evidentiary system, with pressure to monitor, retain, and justify user-level data. The Choice We Are Avoiding None of this is an argument against protecting children online. It is an argument against pretending there is no trade-off. Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: Many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, after building systems and normalizing behavior that protects them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone. The age-verification trap is not a glitch. It is what you get when regulators treat age enforcement as mandatory and privacy as optional.
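To make the “privacy-preserving age proofs” idea concrete, here is a deliberately simplified sketch using the Ed25519 primitives from the Python cryptography package. A trusted issuer that has already verified an ID signs nothing more than an over-the-threshold claim bound to a platform-supplied nonce, and the platform checks the signature without learning a name or birth date. The names and message format are invented, and the sketch also exposes the structural flaw described above: the issuer can only vouch for users who could present an ID in the first place.

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Issuer side (e.g., a government or bank that has already verified an ID).
    issuer_key = ed25519.Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def issue_age_token(nonce: bytes, over_16: bool) -> bytes:
        """Sign a minimal claim (just 'over 16' or not) bound to a platform-chosen nonce."""
        claim = nonce + (b"|over16:yes" if over_16 else b"|over16:no")
        return issuer_key.sign(claim)

    # Platform side: verify the claim without ever seeing identity documents.
    def platform_accepts(nonce: bytes, signature: bytes) -> bool:
        try:
            issuer_pub.verify(signature, nonce + b"|over16:yes")
            return True
        except InvalidSignature:
            return False

    nonce = b"login-session-0001"                # fresh per check, so tokens cannot be replayed
    token = issue_age_token(nonce, over_16=True)
    print(platform_accepts(nonce, token))        # True, with no birth date or name disclosed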
  • Andrew Ng: Unbiggen AI
    Feb 09, 2022 07:31 AM PST
    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets.
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images.
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it.
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. What about using synthetic data? Is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs?
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
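The label-consistency tooling Ng describes can be sketched in a few lines. This is not LandingLens code, and the defect classes, annotations, and agreement threshold below are invented; the point is simply that grouping each image’s labels across annotators and flagging the disagreements gives a reviewer a targeted subset to relabel before retraining.

    from collections import Counter

    # Hypothetical annotations: image id -> labels assigned by different annotators.
    annotations = {
        "img_001": ["scratch", "scratch", "scratch"],
        "img_002": ["pit_mark", "dent", "pit_mark"],
        "img_003": ["discoloration", "scratch", "dent"],
    }

    def flag_inconsistent(annotations, min_agreement=1.0):
        """Return image ids whose annotator agreement falls below min_agreement."""
        flagged = []
        for image_id, labels in annotations.items():
            top_count = Counter(labels).most_common(1)[0][1]
            if top_count / len(labels) < min_agreement:
                flagged.append(image_id)
        return flagged

    # Draw a reviewer's attention to the inconsistent subset instead of relabeling everything.
    print(flag_inconsistent(annotations))      # ['img_002', 'img_003']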
  • How AI Will Change Chip Design
    Feb 08, 2022 06:00 AM PST
    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things.
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It’s very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
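Gorr’s surrogate-model workflow can be sketched with a toy example. Here a deliberately “expensive” analytic function stands in for a physics-based simulation, a low-order polynomial fitted to a handful of its outputs serves as the surrogate, and a Monte Carlo sweep then runs on the cheap surrogate instead of the full model. This is an invented illustration, not MathWorks or MATLAB code, and the model and parameter range are arbitrary.

    import numpy as np

    def expensive_physics_model(x):
        """Stand-in for a slow, physics-based simulation of some device response."""
        return np.sin(3 * x) * np.exp(-0.5 * x) + 0.1 * x**2

    # 1) Run the expensive model at a handful of design points.
    design_points = np.linspace(0.0, 2.0, 15)
    observations = expensive_physics_model(design_points)

    # 2) Fit a cheap surrogate (here, a degree-6 polynomial) to those samples.
    surrogate = np.polynomial.Polynomial.fit(design_points, observations, deg=6)

    # 3) Sweep the surrogate with Monte Carlo sampling: many evaluations for almost nothing.
    rng = np.random.default_rng(0)
    candidates = rng.uniform(0.0, 2.0, size=100_000)
    best = candidates[np.argmax(surrogate(candidates))]

    print(f"surrogate suggests operating near x = {best:.3f}; "
          f"true response there is {expensive_physics_model(best):.3f}")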
  • Atomically Thin Materials Significantly Shrink Qubits
    Feb 07, 2022 08:12 AM PST
    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director for the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
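The footprint advantage can be estimated from the textbook parallel-plate formula C = ε0 εr A / d. The numbers below are illustrative assumptions (a 100-femtofarad shunt capacitance, roughly 10 nanometers of stacked hBN, and a relative permittivity near 3.5), not values from the MIT paper, but they show why a thin hBN dielectric between superconducting plates shrinks the capacitor by a couple of orders of magnitude relative to a 100-by-100-micrometer coplanar plate.

    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    EPS_HBN = 3.5         # assumed relative permittivity of hBN (illustrative)
    C_TARGET = 100e-15    # assumed qubit shunt capacitance: 100 fF
    D_HBN = 10e-9         # assumed hBN thickness: ~10 nm of stacked monolayers

    # Parallel-plate capacitor: C = eps0 * eps_r * A / d, so A = C * d / (eps0 * eps_r).
    area_m2 = C_TARGET * D_HBN / (EPS0 * EPS_HBN)
    side_um = (area_m2 ** 0.5) * 1e6
    coplanar_um2 = 100 * 100  # typical coplanar plate cited above, in square micrometers

    print(f"parallel-plate hBN capacitor: ~{side_um:.1f} um on a side "
          f"({area_m2 * 1e12:.0f} um^2) vs ~{coplanar_um2} um^2 for a coplanar plate")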