Engineering Community Portal

MERLOT Engineering
Share

Welcome – From the Editor

Welcome to the Engineering Portal on MERLOT. Here you will find resources on a wide variety of topics, ranging from aerospace engineering to petroleum engineering, to support your teaching and research.

As you scroll this page, you will find many Engineering resources, including the most recently added Engineering materials and members, journals and publications, and Engineering education alerts and Twitter feeds.

Showcase

Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short 3- to 7-minute simulations cover a range of engineering topics and help students grasp key concepts.

Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online. Another option is an embeddable HTML player, created in Storyline, with review questions for each simulation that reinforce the concepts learned.

The collection was made possible by a Department of Labor grant. Extensive storyboarding and scripting with instructors and industry experts ensure the content is accurate and up to date.

Engineering Technology 3D Simulations in MERLOT

New Materials

New Members

Engineering on the Web

  • Israeli engineering school ranked 5th in world for its handling of COVID - Israel Hayom
    Aug 08, 2022 03:51 AM PDT
  • Is Altair Engineering (NASDAQ:ALTR) A Risky Investment? - Simply Wall St News
    Aug 08, 2022 03:45 AM PDT
  • Mallouk named 2022 National KEEN Rising Star - Rowan Today
    Aug 08, 2022 03:15 AM PDT
  • Worldwide Geotechnical Engineering & Design Software Market Report 2022 - Benzinga
    Aug 08, 2022 03:03 AM PDT
  • Being our true self is invaluable, but how authentic can we actually be in the workplace?
    Aug 08, 2022 02:55 AM PDT
  • MyRechemical awarded engineering contract for a waste to methanol and hydrogen plant
    Aug 08, 2022 02:49 AM PDT
  • UTSA Knowledge Enterprise awards annual seed grants to expand faculty research
    Aug 08, 2022 02:39 AM PDT
  • ACHEMA 2022 preview: Sulzer Chemtech | Hydrocarbon Engineering
    Aug 08, 2022 02:27 AM PDT
  • IMechE Malaysia Presents Students Awards at University of Nottingham Malaysia
    Aug 08, 2022 02:23 AM PDT
  • Worldwide Geotechnical Engineering & Design Software Market Report 2022
    Aug 08, 2022 01:41 AM PDT
  • Worldwide Geotechnical Engineering & Design Software Market Report 2022 - Yahoo Finance
    Aug 08, 2022 01:35 AM PDT
  • RIT Dubai graduate fuses careers in art and engineering - News | Khaleej Times
    Aug 08, 2022 01:22 AM PDT
  • Cellular SoC Silicon Validation Engineering Program Manager - Careers at Apple
    Aug 08, 2022 01:11 AM PDT
  • Japanese partners create program to promote digital engineering for maritime - Safety4Sea
    Aug 08, 2022 01:07 AM PDT
  • Embankment stabilisation and repairs started on collapsed Wokingham road - Ground Engineering
    Aug 08, 2022 01:03 AM PDT
  • Japanese majors team up on digital engineering technology - Offshore Energy
    Aug 08, 2022 12:26 AM PDT
  • Genetic Engineering Drug Market Report 2022: Expanding Research Areas for Growth by 2031
    Aug 08, 2022 12:15 AM PDT
  • Engineering Emmys Announced – Who Were The Biggest Winners
    Aug 07, 2022 11:12 PM PDT
  • Japanese shipping technology leaders in collaboration to develop digital engineering ...
    Aug 07, 2022 11:11 PM PDT
  • Genome Engineering Market: Rising Demand for Synthetic Genes and Technological ... - BioSpace
    Aug 07, 2022 10:55 PM PDT
  • Amazon to Acquire iRobot For $1.7 Billion
    Aug 05, 2022 12:00 PM PDT
    This morning, Amazon and iRobot announced “a definitive merger agreement under which Amazon will acquire iRobot” for US $1.7 billion. The announcement was a surprise, to put it mildly, and we’ve barely had a chance to digest the news. But taking a look at what’s already known can still yield initial (if incomplete) answers as to why Amazon and iRobot want to team up—and whether the merger seems like a good idea. The press release, like most press releases about acquisitions of this nature, doesn’t include much in the way of detail. But here are some quotes: “We know that saving time matters, and chores take precious time that can be better spent doing something that customers love,” said Dave Limp, SVP of Amazon Devices. “Over many years, the iRobot team has proven its ability to reinvent how people clean with products that are incredibly practical and inventive—from cleaning when and where customers want while avoiding common obstacles in the home, to automatically emptying the collection bin. Customers love iRobot products—and I'm excited to work with the iRobot team to invent in ways that make customers' lives easier and more enjoyable.” “Since we started iRobot, our team has been on a mission to create innovative, practical products that make customers' lives easier, leading to inventions like the Roomba and iRobot OS,” said Colin Angle, chairman and CEO of iRobot. “Amazon shares our passion for building thoughtful innovations that empower people to do more at home, and I cannot think of a better place for our team to continue our mission. I’m hugely excited to be a part of Amazon and to see what we can build together for customers in the years ahead.” There’s not much to go on here, and iRobot has already referred us to Amazon PR, which, to be honest, feels like a bit of a punch in the gut. I love (loved?) so many things about iRobot—their quirky early history working on weird DARPA projects and even weirder toys, everything they accomplished with the PackBot (and also this), and most of all, the fact that they’ve made a successful company building useful and affordable robots for the home, which is just…it’s so hard to do that I don’t even know where to start. And nobody knows what’s going to happen to iRobot going forward. I’m sure iRobot and Amazon have all kinds of plans and promises and whatnot, but still—I’m now nervous about iRobot’s future. Why this is a good move for Amazon is clear, but what exactly is in it for iRobot? It seems fairly obvious why Amazon wanted to get its hands on iRobot. Amazon has been working for years to integrate itself into homes, first with audio systems (Alexa), and then video (Ring), and more recently some questionable home robots of its own, like its indoor security drone and Astro. Amazon clearly needs some help in understanding how to make home robots useful, and iRobot can likely provide some guidance, with its extraordinarily qualified team of highly experienced engineers. And needless to say, iRobot is already well established in a huge number of homes, with brand recognition comparable to something like Velcro or Xerox, in the sense that people don’t have “robot vacuums,” they have Roombas. All those Roombas in all of those homes are also collecting a crazy amount of data for iRobot. iRobot itself has been reasonably privacy-sensitive about this, but it would be naïve not to assume that Amazon sees a lot of potential for learning much, much more about what goes on in our living rooms. 
This is more concerning, because Amazon has its own ideas about data privacy, and it’s unclear what this will mean for increasingly camera-reliant Roombas going forward. I get why this is a good move for Amazon, but I must admit that I’m still trying to figure out what exactly is in it for iRobot, besides of course that “$61 per share in an all-cash transaction valued at approximately $1.7 billion.” Which, to be fair, seems like a heck of a lot of money. Usually when these kinds of mergers happen (and I’m thinking back to Google acquiring all those robotics companies in 2013), the hypothetical appeal for the robotics company is that suddenly they have a bunch more resources to spend on exciting new projects along with a big support structure to help them succeed. It’s true that iRobot has apparently had some trouble with finding ways to innovate and grow, with their biggest potential new consumer product (the Terra lawn mower) having been on pause since 2020. It could be that big pile of cash, plus not having to worry so much about growth as a publicly traded company, plus some new Amazon-ish projects to work on could be reason enough for this acquisition. My worry, though, is that iRobot is just going to get completely swallowed into Amazon and effectively cease to exist in a meaningful and unique way. I hope that the relationship between Amazon and iRobot will be an exception to this historical trend. Plus, there is some precedent for this—Boston Dynamics, for example, has survived multiple acquisitions while keeping its technology and philosophy more or less independent and intact. It’ll be on iRobot to very aggressively act to preserve itself, and keeping Colin Angle as CEO is a good start. We’ll be trying to track down more folks to talk to about this over the coming weeks for a more nuanced and in-depth perspective. In the meantime, make sure to give your Roomba a hug—it’s been quite a day for little round robot vacuums.
  • Video Friday: Build a Chair
    Aug 05, 2022 08:44 AM PDT
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IEEE CASE 2022: 20–24 August 2022, MEXICO CITY CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today's videos! This probably counts as hard mode for Ikea chair assembly. [ Naver Lab ] As anyone working with robotics knows, it’s mandatory to spend at least 10 percent of your time just mucking about with them because it’s fun, as GITAI illustrates with its new 10-meter robotic arm. [ GITAI ] Well, this is probably the weirdest example of domain randomization in simulation for quadrupeds that I’ve ever seen. [ RSL ] The RoboCup 2022 was held in Bangkok, Thailand. The final match was between B-Human from Bremen (jerseys in black) and HTWK Robots from Leipzig (jerseys in blue). The video starts with one of our defending robots starting a duel with the opponent. After a short time a pass is made to another robot, which tries to score a goal, but the opponent goalie is able to catch the ball. Afterwards another attacker robot is already waiting at the center circle, to take its chance to score a goal, through all four opponent robots. [ Team B-Human ] The mission to return Martian samples back to Earth will see a European 2.5-meter-long robotic arm pick up tubes filled with precious soil from Mars and transfer them to a rocket for a historic interplanetary delivery. [ ESA ] I still cannot believe that this is an approach to robotic fruit-picking that actually works. [ Tevel Aerobotics ] This video shows the basic performance of the humanoid robot Torobo, which is used as a research platform for JST’s Moonshot R&D program. [ Tokyo Robotics ] Volocopter illustrates why I always carry two violins with me everywhere. You know, just in case. [ Volocopter ] We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task. [ Hybrid Robotics ] I will always love watching Cassie try very, very hard to not fall over, and then fall over. <3 [ Michigan Robotics ] I don’t think this paper is about teaching bipeds to walk with attitude, but it should be. [ DLG ] Modboats are capable of collective swimming in arbitrary configurations! In this video you can see three different configurations of the Modboats swim across our test space and demonstrate their capabilities. [ ModLab ] How have we built our autonomous driving technology to navigate the world safely? It comes down to three easy steps: Sense, Solve, and Go. Using a combination of lidar, camera, radar, and compute, the Waymo Driver can visualize the world, calculate what others may do, and proceed smoothly and safely, day and night. [ Waymo ] Alan Alda discusses evolutionary robotics with Hod Lipson and Jordan Pollack on Scientific American Frontiers in 1999. [ Creative Machines Lab ] Brady Watkins gives us insight into how a big company like Softbank Robotics looks into the robotics market. [ Robohub ]
  • Who Actually Owns Tesla’s Data?
    Aug 05, 2022 08:26 AM PDT
    On 29 September 2020, a masked man entered a branch of the Wells Fargo bank in Washington, D.C., and handed the teller a note: “This is a robbery. Act calm give me all hundreds.” The teller complied. The man then fled the bank and jumped into a gray Tesla Model S. This was one of three bank robberies the man attempted the same day. When FBI agents began investigating, they reviewed Washington, D.C.’s District Department of Transportation camera footage, and spotted a Tesla matching the getaway vehicle’s description. The license plate on that car showed that it was registered to Exelorate Enterprises LLC, the parent company of Steer EV—a D.C.-based monthly vehicle-subscription service. Agents served a subpoena on Steer EV for the renter’s billing and contact details. Steer EV provided those—and also voluntarily supplied historical GPS data for the vehicle. The data showed the car driving between, and parking at, each bank at the time of the heists. The renter was arrested and, in September, sentenced to four years in prison. “If an entity is collecting, retaining, [and] sharing historical location data on an individualized level, it’s extraordinarily difficult to de-identify that, verging on impossible.” —John Verdi, Future of Privacy Forum In this case, the GPS data likely came from a device Steer EV itself installed in the vehicle (neither Steer nor Tesla responded to interview requests). However, according to researchers, Tesla is potentially in a position to provide similar GPS tracks for many of its 3 million customers. For Teslas built since mid-2017, “every time you drive, it records the whole track of where you drive, the GPS coordinates and certain other metrics for every mile driven,” says Green, a Tesla owner who has reverse engineered the company’s Autopilot data collection. “They say that they are anonymizing the trigger results,” but, he says, “you could probably match everything to a single person if you wanted to.” Each of these trip logs, and other data “snapshots” captured by the Autopilot system that include images and video, is stripped of its identifying VIN and given a temporary, random ID number when it is uploaded to Tesla, says Green. However, he notes, that temporary ID can persist for days or weeks, connecting all the uploads made during that time. Elon Musk, CEO of Tesla MotorsMark Mahaney/Redux Given that some trip logs will also likely record journeys between a driver’s home, school, or place of work, guaranteeing complete anonymity is unrealistic, says John Verdi, senior vice president of policy at the Future of Privacy Forum: “If an entity is collecting, retaining, [and] sharing historical location data on an individualized level, it’s extraordinarily difficult to de-identify that, verging on impossible.” Tesla, like all other automakers, has a policy that spells out what it can and cannot do with the data it gets from customers’ vehicles, including location information. This states that while the company does not sell customer and vehicle data, it can share that data with service providers, business partners, affiliates, some authorized third parties, and government entities according to the law. Owners can buy a special kit for US $1,400 that allows them to access data on their own car's event data recorder, but this represents just a tiny subset of the data the company collects, and is related only to crashes. 
Owners living in California and Europe benefit from legislation that means Tesla will provide access to more data generated by their vehicles, although not the Autopilot snapshots and trip logs that are supposedly anonymized. Once governments realize that a company possesses such a trove of information, it could be only a matter of time before they seek access to it. “If the data exists…and in particular exists in the domain of somebody who’s not the subject of those data, it’s much more likely that a government will eventually get access to them in some way,” says Bryant Walker Smith, an associate professor in the schools of law and engineering at the University of South Carolina. “Individuals ought to think about their cars more like they think about their cellphones.” —John Verdi, Future of Privacy Forum This is not necessarily a terrible thing, Walker says, who suggests that such rich data could unlock valuable insights into which roads or intersections are dangerous. The wealth of data could also surface subtle problems in the vehicles themselves. In many ways, the data genie is already out of the bottle, according to Verdi. “Individuals ought to think about their cars more like they think about their cellphones,” he says. “The auto industry has a lot to learn from the ways that mobile-phone operating systems handle data permissions…. Both iOS and Android have made great strides in recent years in empowering consumers when it comes to data collection, data disclosure, and data use.” Tesla permits owners to control some data sharing, including Autopilot and road segment analytics. If they want to opt out of data collection completely, they can ask Tesla to disable the vehicle’s connectivity altogether. However, this would mean losing features such as remote services, Internet radio, voice commands, and Web browser functionality, and even safety-related over-the-air updates. Green says he is not aware of anyone who has successfully undergone this nuclear option. The only real way to know you’ve prevented data sharing, he says, is to “go to a repair place and ask them to remove the modem out of the car.” Tesla almost certainly has the biggest empire of customer and vehicle data among automakers. It also appears to be the most aggressive in using that data to develop its automated driving systems, and to protect its reputation in the courts of law and public opinion, even to the detriment of some of its customers. But while the world’s most valuable automaker dominates the discussion around connected cars, others are not far behind. Elon Musk’s insight—to embrace the data-driven world that our other digital devices already inhabit—is rapidly becoming the industry standard. When our cars become as powerful and convenient as our phones, it is hardly surprising that they suffer the same challenges around surveillance, privacy, and accountability.
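The de-identification problem Verdi describes is easy to see in outline. The sketch below is not based on any Tesla code or data format; it is a minimal, hypothetical Python illustration of why trip logs that share a temporary ID for days or weeks resist true anonymization: grouping trips by that ID and looking at where they repeatedly start or end tends to point straight at a home or workplace.

```python
# Hypothetical illustration (not Tesla's format): why trip logs keyed by a
# temporary ID that persists for days are hard to truly anonymize.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Trip:
    temp_id: str    # random ID assigned at upload, reused for days or weeks
    start: tuple    # (lat, lon) of the first GPS fix
    end: tuple      # (lat, lon) of the last GPS fix

def round_fix(fix, places=3):
    """Snap a GPS fix to a coarse grid (~100 m in latitude) so repeat visits cluster."""
    return (round(fix[0], places), round(fix[1], places))

def likely_home(trips, temp_id):
    """Most frequent trip endpoint for one temporary ID.

    If many trips under the same ID begin or end at one spot, that spot is
    very likely the driver's home or workplace, defeating the VIN stripping.
    """
    endpoints = Counter()
    for t in trips:
        if t.temp_id == temp_id:
            endpoints[round_fix(t.start)] += 1
            endpoints[round_fix(t.end)] += 1
    return endpoints.most_common(1)[0]

# Example: three commutes uploaded under the same temporary ID.
trips = [
    Trip("a1b2", (47.6207, -122.3493), (47.6098, -122.3331)),
    Trip("a1b2", (47.6099, -122.3332), (47.6208, -122.3494)),
    Trip("a1b2", (47.6206, -122.3491), (47.6153, -122.3282)),
]
print(likely_home(trips, "a1b2"))  # ((47.621, -122.349), 3): the repeated endpoint
```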
  • Digging Into the New QD-OLED TVs
    Aug 04, 2022 01:30 PM PDT
    Televisions and computer monitors with QD-OLED displays are now on store shelves. The image quality is—as expected—impressive, with amazing black levels, wide viewing angles, a broad color gamut, and high brightness. The products include: Samsung’s S95B TV in 55-inch (US $2,200) and 65-inch ($3,000) sizes Sony’s A95K TV in 55-inch ($3,000) and 65-inch ($4,000) sizes Alienware’s 34-inch gaming monitor ($1,300) All these products use display panels manufactured by Samsung but have their own unique display assembly, operating system, and electronics. I took apart a 55-inch Samsung S95B to learn just how these new displays are put together (destroying it in the process). I found an extremely thin OLED backplane that generates blue light with an equally thin QD color-converting structure that completes the optical stack. I used a UV light source, a microscope, and a spectrometer to learn a lot about how these displays work. A few surprises: The pixel layout is unique. Instead of being evenly arrayed, the green quantum dots form their own line, separate from the blue and red [see photo, above]. (The blue pixels draw their light directly from the OLED panel, the red and green pixels are lit by quantum dots.) The bandwidth of the native QD emission is so narrow (resulting in a very wide color gamut, that is, the range of colors that can be produced, generally a good thing) that some content doesn’t know how to handle it. So the TV “compresses” the gamut in some cases by adding off-primary colors to bring its primary color points in line with more common gamuts. This is especially dramatic with green, where “pure” green actually contains a significant amount of added red and a small amount of added blue. While taking this thing apart was no easy task, and deconstruction cracked the screen, I was surprised at how easily the QD frontplane and the OLED backplane could be separated. It was easier than splitting an Oreo in half. [See video, below.] As for the name of this technology, Samsung has used the branding OLED, QD Display, and QD-OLED, while Sony is just using OLED. Alienware uses QD-OLED to describe the new tech (as do most in the display industry). —Peter Palomaki Story from January 2022 follows: For more than a decade now, OLED (organic light-emitting diode) displays have set the bar for screen quality, albeit at a price. That’s because they produce deep blacks, offer wide viewing angles, and have a broad color range. Meanwhile, QD (quantum dot) technologies have done a lot to improve the color purity and brightness of the more wallet-friendly LCD TVs. In 2022, these two rival technologies will merge. The name of the resulting hybrid is still evolving, but QD-OLED seems to make sense, so I’ll use it here, although Samsung has begun to call its version of the technology QD Display. To understand why this combination is so appealing, you have to know the basic principles behind each of these approaches to displaying a moving image. In an LCD TV, the LED backlight, or at least a big section of it, is on all at once. The picture is created by filtering this light at the many individual pixels. Unfortunately, that filtering process isn’t perfect, and in areas that should appear black some light gets through. In OLED displays, the red, green, and blue diodes that comprise each pixel emit light and are turned on only when they are needed. So black pixels appear truly black, while bright pixels can be run at full power, allowing unsurpassed levels of contrast. But there’s a drawback. 
The colored diodes in an OLED TV degrade over time, causing what’s called “burn-in.” And with these changes happening at different rates for the red, green, and blue diodes, the degradation affects the overall ability of a display to reproduce colors accurately as it ages and also causes “ghost” images to appear where static content is frequently displayed. Adding QDs into the mix shifts this equation. Quantum dots—nanoparticles of semiconductor material—absorb photons and then use that energy to emit light of a different wavelength. In a QD-OLED display, all the diodes emit blue light. To get red and green, the appropriate diodes are covered with red or green QDs. The result is a paper-thin display with a broad range of colors that remain accurate over time. These screens also have excellent black levels, wide viewing angles, and improved power efficiency over both OLED and LCD displays. Samsung is the driving force behind the technology, having sunk billions into retrofitting an LCD fab in Tangjeong, South Korea, for making QD-OLED displays While other companies have published articles and demonstrated similar approaches, only Samsung has committed to manufacturing these displays, which makes sense because it holds all of the required technology in house. Having both the OLED fab and QD expertise under one roof gives Samsung a big leg up on other QD-display manufacturers., Samsung first announced QD-OLED plans in 2019, then pushed out the release date a few times. It now seems likely that we will see public demos in early 2022 followed by commercial products later in the year, once the company has geared up for high-volume production. At this point, Samsung can produce a maximum of 30,000 QD-OLED panels a month; these will be used in its own products. In the grand scheme of things, that’s not that much. Unfortunately, as with any new display technology, there are challenges associated with development and commercialization. For one, patterning the quantum-dot layers and protecting them is complicated. Unlike QD-enabled LCD displays (commonly referred to as QLED) where red and green QDs are dispersed uniformly in a polymer film, QD-OLED requires the QD layers to be patterned and aligned with the OLEDs behind them. And that’s tricky to do. Samsung is expected to employ inkjet printing, an approach that reduces the waste of QD material. Another issue is the leakage of blue light through the red and green QD layers. Leakage of only a few percent would have a significant effect on the viewing experience, resulting in washed-out colors. If the red and green QD layers don’t do a good job absorbing all of the blue light impinging on them, an additional blue-blocking layer would be required on top, adding to the cost and complexity. Another challenge is that blue OLEDs degrade faster than red or green ones do. With all three colors relying on blue OLEDs in a QD-OLED design, this degradation isn’t expected to cause as severe color shifts as with traditional OLED displays, but it does decrease brightness over the life of the display. Today, OLED TVs are typically the most expensive option on retail shelves. And while the process for making QD-OLED simplifies the OLED layer somewhat (because you need only blue diodes), it does not make the display any less expensive. 
In fact, due to the large number of quantum dots used, the patterning steps, and the special filtering required, QD-OLED displays are likely to be more expensive than traditional OLED ones—and way more expensive than LCD TVs with quantum-dot color purification. Early adopters may pay about US $5,000 for the first QD-OLED displays when they begin selling later this year. Those buyers will no doubt complain about the prices—while enjoying a viewing experience far better than anything they’ve had before.
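For readers who want to make the "wide color gamut" point concrete, a common back-of-the-envelope comparison is the area of the triangle that a display's red, green, and blue primaries span in CIE 1931 xy chromaticity space. The Python sketch below performs only that generic calculation using published Rec.709 and DCI-P3 primaries; it uses no Samsung panel data. Narrow-band QD primaries sit even farther out toward the spectral locus, which is exactly why the TV has to compress such content back toward these common gamuts.

```python
# Generic gamut-size comparison in CIE 1931 xy space (illustrative only;
# no measured QD-OLED data). A display's gamut is the triangle spanned by
# its red, green, and blue primary chromaticities.

def triangle_area(primaries):
    """Shoelace formula for the area of an (R, G, B) chromaticity triangle."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Published primary chromaticities (CIE 1931 x, y).
REC_709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # HDTV / sRGB
DCI_P3  = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]   # wide-gamut video

a_709, a_p3 = triangle_area(REC_709), triangle_area(DCI_P3)
print(f"Rec.709 xy area: {a_709:.4f}")
print(f"DCI-P3  xy area: {a_p3:.4f} ({a_p3 / a_709:.0%} of Rec.709)")
# A narrow-band QD primary set would enlarge the triangle further still,
# so the TV maps ("compresses") content back into these smaller gamuts.
```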
  • Tesla’s Autopilot Depends on a Deluge of Data
    Aug 04, 2022 01:28 PM PDT
    In 2019, Elon Musk stood up at a Tesla day devoted to automated driving and said, “Essentially everyone’s training the network all the time, is what it amounts to. Whether Autopilot’s on or off, the network is being trained.” Tesla’s suite of assistive and semi-autonomous technologies, collectively known as Autopilot, is among the most widely deployed—and undeniably the most controversial—driver-assistance systems on the road today. While many drivers love it, using it for a combined total of more than 5 billion kilometers, the technology has been involved in hundreds of crashes, some of them fatal, and is currently the subject of a comprehensive investigation by the National Highway Traffic Safety Administration. This second story—in IEEE Spectrum’s series of three on Tesla’s empire of data (story 1; story 3)—focuses on how Autopilot rests on a foundation of data harvested from the company’s own customers. Although the company’s approach has unparalleled scope and includes impressive technological innovations, it also faces particular challenges—not least of which is Musk’s decision to widely deploy the misleadingly named Full Self-Driving feature as a largely untested beta. “Right now, automated vehicles are one to two magnitudes below human drivers in terms of safety performance.” —Henry Liu, Mcity Most companies working on automated driving rely on a small fleet of highly instrumented test vehicles, festooned with high-resolution cameras, radars, and laser-ranging lidar devices. Some of these have been estimated to generate 750 megabytes of sensor data every second, providing a rich seam of training data for neural networks and other machine-learning systems to improve their driving skills. Such systems have now effectively solved the task of everyday driving, including for a multitude of road users, different weather conditions, and road types, says Henry Liu, director of Mcity, a public-private mobility research partnership at the University of Michigan. “But right now, automated vehicles are one to two magnitudes below human drivers in terms of safety performance,” says Liu. “And that’s because current automated vehicles can’t handle the curse of rarity: low-frequency, long-tail, safety-critical events that they just don’t see enough to know how to handle.” Think of a deer suddenly springing into the road, or a slick of spilled fuel. Tesla’s bold bet is that its own customers can provide the long tail of data needed to boost self-driving cars to superhuman levels of safety. Above and beyond their contractual obligations, many are happy to do so—seeing themselves as willing participants in the development of technology that they have been told will one day soon allow them to simply sit back and enjoy being driven by the car itself. For a start, the routing information for every trip undertaken in a recent model Autopilot-equipped Tesla is shared with the company—see the the previous installment in this series. But Tesla’s data effort goes far beyond navigation. In autonomy presentations over the past few years, Musk and Tesla’s then-head of AI, Andrej Karpathy, detailed the company’s approach, including its so-called Shadow Mode. Philipp Mandler/Unsplash In Shadow Mode, operating on Tesla vehicles since 2016, if the car’s Autopilot computer is not controlling the car, it is simulating the driving process in parallel with the human driver. 
When its own predictions do not match the driver’s behavior, this might trigger the recording of a short “snapshot” of the car’s cameras, speed, acceleration, and other parameters for later uploading to Tesla. Snapshots are also triggered when a Tesla crashes. After the snapshots are uploaded, a team may review them to identify human actions that the system should try to imitate, and input them as training data for its neural networks. Or they may notice that the system is failing, for instance, to properly identify road signs obscured by trees. In that case, engineers can train a detector designed specifically for this scenario and download it to some or all Tesla vehicles. “We can beam it down to the fleet, and we can ask the fleet to please apply this detector on top of everything else you’re doing,” said Karpathy in 2020. If that detector thinks it spots such a road sign, it will capture images from the car’s cameras for later uploading, His team would quickly receive thousands of images, which they would use to iterate the detector, and eventually roll it out to all production vehicles. “I’m not exactly sure how you build out a data set like this without the fleet,” said Karpathy. An amateur Tesla hacker who tweets using the pseudonym Green told Spectrum that he identified over 900 Autopilot test campaigns, before the company stopped numbering them in 2019. For all the promise of Tesla’s fleet learning, Autopilot has yet to prove that it can drive as safely as a human, let alone be trusted to operate a vehicle without supervision. Liu is bullish on Tesla’s approach to leveraging its ever-growing consumer base. “I don’t think a small…fleet will ever be able to handle these [rare] situations,” he says. “But even with these shadow drivers—and if you deploy millions of these fleet vehicles, that’s a very, very large data collection—I don’t know whether Tesla is fully utilizing them because there’s no public information really available.” One obstacle is the sheer cost. Karpathy admitted that having a large team to assess and label images and video was expensive and said that Tesla was working on detectors that can train themselves on video clips captured in Autopilot snapshots. In June, the company duly laid off 195 people working on data annotation at a Bay Area office. While the Autopilot does seem to have improved over the years, with Tesla allowing its operation on more roads and in more situations, serious and fatal accidents are still occurring. These may or may not have purely technical causes. Certainly, some drivers seem to be overestimating the system’s capabilities or are either accidentally or deliberately failing to supervise it sufficiently. Other experts are worried that Tesla’s approach has more fundamental flaws. “The vast majority of the world generally believes that you’re never going to get the same level of safety with a camera-only system that you will based on a system that includes lidar,” says Dr. Matthew Weed, senior director of product management at Luminar, a company that manufacturers advanced lidar systems. He points out that Tesla’s Shadow Mode only captures a small fraction of each car’s driving time. “When it comes to safety, the whole thing is about…your unknown unknowns,” he says. “What are the things that I don’t even know about that will cause my system to fail? Those are really difficult to ascertain in a bulk fleet” that is down-selecting data. 
For all the promise of Tesla’s fleet learning and the enthusiastic support of many of its customers, Autopilot has yet to prove that it can drive as safely as a human, let alone be trusted to operate a vehicle without supervision. And there are other difficulties looming. Andrej Karpathy left Tesla in mid-July, while the company continues to face the damaging possibility of NHTSA issuing a recall for Autopilot in the United States. This would be a terrible PR (and possibly economic) blow for the company but would likely not halt its harvesting of customer data to improve the system, nor prevent its continued deployment overseas. Tesla’s use of fleet vehicle data to develop Autopilot echoes the user-fueled rise of Internet giants like Google, YouTube, and Facebook. The more its customers drive, so Musk’s story goes, the better the system performs. But just as tech companies have had to come to terms with their complicated relationships with data, so Tesla is beginning to see a backlash. Why does the company charge US $12,000 for a so-called “full self-driving” capability that is utterly reliant on its customers’ data? How much control do drivers have over data extracted from their daily journeys? And what happens when other entities, from companies to the government, seek access to it? These are the themes for our third story.
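Tesla has not published Shadow Mode's internals, but the trigger mechanism Karpathy describes, a policy running alongside the human driver and flagging disagreements for upload, can be sketched in generic terms. The Python below is a hypothetical illustration of that logic only; every class, threshold, and field name is invented for the example, and none of it is Tesla code.

```python
# Hypothetical sketch of a "shadow mode" trigger: a driving policy runs in
# parallel with the human driver and records a snapshot when its prediction
# disagrees with what the driver actually did. Names and thresholds are invented.
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    camera: bytes             # compressed camera frame
    speed_mps: float
    driver_steering: float    # what the human actually did (radians)
    driver_braking: bool

class ShadowModeRecorder:
    def __init__(self, policy, steering_tolerance=0.15, buffer_frames=300):
        self.policy = policy                        # predicts (steering, brake)
        self.tol = steering_tolerance
        self.buffer = deque(maxlen=buffer_frames)   # rolling pre-trigger buffer
        self.snapshots = []

    def on_frame(self, frame: Frame):
        self.buffer.append(frame)
        pred_steer, pred_brake = self.policy(frame)
        disagree = (
            abs(pred_steer - frame.driver_steering) > self.tol
            or pred_brake != frame.driver_braking
        )
        if disagree:
            # Bundle the recent frames for later upload and human review;
            # reviewers label the event and feed it back as training data.
            self.snapshots.append(list(self.buffer))

# Usage with a trivial stand-in policy that always predicts "go straight".
recorder = ShadowModeRecorder(policy=lambda f: (0.0, False))
recorder.on_frame(Frame(b"...", 20.0, driver_steering=0.4, driver_braking=False))
print(len(recorder.snapshots))  # 1: the driver's sharp steering disagreed
```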
  • Rhode Island’s Renewable Energy Goal Is a Beacon for Other States
    Aug 04, 2022 11:14 AM PDT
    Early in July, Rhode Island’s governor signed legislation mandating that the state acquire 100 percent of its electricity from renewable sources by 2033. Among the state’s American peers, there’s no deadline more ambitious. “Anything more ambitious, and I would start being a little skeptical that it would be attainable,” says Seaver Wang, a climate and energy researcher at the Breakthrough Institute. It is true that Rhode Island is small. It is also true that the state’s conditions make it riper for such a timeframe than most of the country. But watching this tiny state go about its policy business, analysts say, might show other states how to light their own ways into a renewable future. Rhode Island’s 2033 deadline comes in the form of a renewable-energy standard, setting a goal that electricity providers must meet by collecting a certain number of certificates. Electricity providers can earn those certificates by generating electricity from renewable sources themselves; alternatively, they can buy certificates from other providers. (Numerous other states have similar standards—Rhode Island’s current standard is actually an upgrade to an older standard—and policy wonks have mooted a national standard.) Today, it might seem a bit optimistic to pin hopes for renewable energy on a state that still gets 89 percent of its electricity from natural gas. Much of the meager wind power that does exist comes either from other states or from the 30-megawatt Block Island Wind Farm—the first offshore wind farm in the United States—which consists of just five turbines and only came online in 2016. But Rhode Island plans to fill the gap with as much as 600 megawatts of new wind power. To aid this effort, it has partnered with Ørsted, which could bring a critical mass of turbine expertise from Europe, where the sector is far more advanced. “I think that adds greatly to the likelihood of [Rhode Island’s] success,” says Morgan Higman, a clean-energy researcher at the Center for Strategic and International Studies, in Washington, D.C. The policies in the package are, indeed, quite specific to Rhode Island’s position. Not only is it one of the least populous states in the United States, it already has about the lowest per capita energy consumption in the country. Moreover, powering a service-oriented economy, Rhode Island’s grid doesn’t have to accommodate many energy-intensive manufacturing firms. That makes that 2033 goal all the more achievable. “It’s better to have attainable goals and focus on a diverse portfolio of policies to promote clean energy advancement, rather than sort of rush to meet what is essentially…a bit of a PR goal,” says Wang. That Rhode Island is going all-in on something this maritime state might have in abundance—offshore wind—offers another lesson. Higman says it’s a good example of using a state’s own potential resources. Moreover, the partnership with Ørsted might help the state harness helpful expertise. In similar fashion, Texans could choose to double down on that state’s own wind-power portfolio. New Mexico could potentially shape a renewable-energy supply from its bountiful sunlight. Doing this sort of thing, Higman says, “is the fastest way that we see states accelerate renewable-energy deployment.” Rhode Island’s policy does leave some room for improvement. Its focus on renewables looks past New England’s largest source of carbon-free energy: fission. 
Just two nuclear power plants (Millstone in Connecticut and Seabrook in New Hampshire) pump out more than a fifth of the region’s electricity. A more inclusive policy might take note and incentivize nuclear power, too. Perhaps most important, any discussion of energy policy should note that Rhode Island’s grid doesn’t exist in a vacuum; it’s linked in with the grids of its surrounding states in New England, New York, and beyond. (Indeed, it has repeatedly partnered on setting goals and building new offshore wind power.) If neighboring states implement similarly aggressive standards without actually building new energy capacity, then there’s a chance that when all the renewable energy certificates are bought out, some states won’t have any renewable energy left. But analysts are optimistic that Rhode Island can do the job. “Rhode Island does deserve some kudos for this policy,” says Wang. “It’s really tempting to applaud states for their goals. This is a useful example of where setting a goal is not very meaningful,” adds Higman. “Identifying the means and strategies and technologies to achieve that goal is the most important thing. And Rhode Island has done that.”
  • Inventor of AT&T’s Datakit, the First Virtual Connection Switch, Dies at 85
    Aug 04, 2022 11:00 AM PDT
    Alexander “Sandy” Fraser Developer of the first virtual circuit network switch Fellow, 85; died 13 June Fraser developed the Datakit, the first virtual circuit network switch, while working at AT&T Labs in Florham Park, N.J. The telecommunications technology is used by all major U.S. telephone companies. He invented other pioneering technologies as well, including the file system for the Titan supercomputer (prototype of Atlas 2), cell-based networks (precursor to asynchronous transfer mode), and the Euphony processor, which was one of the first system-on-a-chip microprocessors. He began his career at Ferranti, an electrical engineering and equipment company in Manchester, England. He left there in 1966 to join the University of Cambridge as an assistant director of research. After three years, he moved to the United States to work for AT&T Bell Labs in Holmdel, N.J. While there, he helped develop the Moving Picture Experts Group Advanced Audio Coder, which compresses music signals. First used in Apple’s iTunes program, it now can be found in all smartphones. Fraser held several leadership positions at the company during his 30 years there. He became director of the Computing Science Research Center in 1982 and five years later was promoted to executive director. In 1994 he became associate vice president for the company’s information science research department. In 1996 he helped establish AT&T Labs in Florham Park. It is the company’s research and development division, of which he was vice president for two years. He decided to focus more on research and left his position as vice president. AT&T named him chief scientist, and in that position he worked on developing architecture and protocols for a large-scale Internet so that customers could connect to it from their homes. In 2002 Fraser retired and founded Fraser Research, in Princeton, N.J., where he continued his networking work. He earned his bachelor’s degree in aeronautical engineering in 1958 from the University of Bristol, in England. He went on to receive a Ph.D. in computing science in 1969 from Cambridge. Byung-Gook Park Vice chair of IEEE Region 10 Fellow, 62; died 20 May Park was an active IEEE volunteer and was serving as the 2021–2022 vice chair of IEEE Region 10 at the time of his death. He was the 2014–2015 chair of the IEEE Seoul Section. He was a member of several committees at conferences including the IEEE International Electron Devices Meeting, the International Conference on Solid State Devices and Materials, and the International Technical Conference on Circuits/ Systems, Computers, and Communications. He served as editor of IEEE Electron Device Letters and editor in chief of the Journal of Semiconductor Technology and Science. From 1990 to 1993, he worked at AT&T Bell Labs in Murray Hill, N.J., before joining Texas Instruments in Dallas. After one year, he left the company and joined Seoul National University as an assistant professor of electrical and computer engineering. He worked at the university at the time of his death. His research interests included the design and fabrication of neuromorphic devices and circuits, flash memories, and silicon quantum devices. Park authored or coauthored more than 1,200 research papers. He was granted 107 patents in Korea and 46 U.S. patents. He received bachelor’s and master’s degrees in electronics engineering from Seoul National University in 1982 and 1984, respectively, and a Ph.D. in EE in 1990 from Stanford. 
David Ellis Hepburn Past vice chair of IEEE Canada’s Teacher-in-Service Program Life senior member, 91; died 25 March Hepburn was a strong proponent of preuniversity education and enjoyed helping shape the next generation of engineers. He was involved with IEEE Canada’s Teacher-in-Service Program, an initiative that aims to improve elementary and secondary school technical education by offering teachers lesson plans and training workshops. He served as vice chair of the program’s committee. He was honored for his contributions with the 2017 IEEE Canada Presidents’ Make-a-Difference Award. He was an active volunteer for TryEngineering, a website that provides teachers, parents, and students with engineering resources. These include hands-on classroom activities, lesson plans, and information about engineering careers and university programs. He wrote six lessons, which cover transformers, AC and DC motors, magnetism, binary basics, and solar power. While a student at Staffordshire University, in England, he was an intern at electrical equipment manufacturer English Electric in Stafford. Five years after graduating in 1952, he joined utility company Hydro-Québec in Montreal as a systems design engineer. In 1965 he went to work for consulting firm Acres International in Montreal. His first assignment there was with the design and construction team for the Churchill Falls underground hydroelectric power station, in Labrador, Nfld. In 1969 he was tasked with helping to build transmission lines in Bangladesh that connected the country’s eastern and western electrical grids. He and his family lived there for two years. After that, Hepburn continued to work on international projects in countries including Indonesia, Nepal, and Pakistan. Following his retirement in 1994, he worked as a consultant for organizations including the World Bank and the Canadian International Development Agency. He also volunteered for the Canadian Executive Service Organization, a nonprofit that provides underserved communities worldwide with mentorship, coaching, and training in sectors such as alternative energy, forestry, and manufacturing. He volunteered on projects in Guatemala and Honduras. Markus Zahn Professor emeritus at MIT Life Fellow, 75; died 13 March Zahn was a professor of electrical engineering for 50 years. He taught at the University of Florida in Gainesville in 1970 and worked there for 10 years before joining MIT, where he spent the remainder of his career. He researched how electromagnetic fields interact with materials, and he developed a method for magnetically separating oil and water, as well as a system that detects buried dielectric, magnetic, and conducting devices such as land mines. He was director of MIT’s 6-A program, which provides undergraduate students with mentoring and internship opportunities. Zahn, who was granted more than 20 U.S. patents, worked as a consultant for Dow, Ford, Texas Instruments, and other companies He received bachelor’s, master’s, and doctoral degrees in engineering from MIT.
  • Baidu’s PaddlePaddle Spins AI up to Industrial Applications
    Aug 04, 2022 07:31 AM PDT
    TensorFlow, PyTorch, and Keras: Those three deep-learning frameworks have dominated AI for years even as newer entrants gain steam. But one framework you don’t hear much about in the West is China’s PaddlePaddle, the most popular Chinese framework in the world’s most populous country. It is an easy-to-use, efficient, flexible, and scalable deep-learning platform, originally developed by Baidu, the Chinese AI giant, to apply deep learning to many of its own products. Today, it is being used by more than 4.77 million developers and 180,000 enterprises globally. While comparable numbers are hard to come by for other frameworks, suffice to say, that’s big. Baidu recently announced new updates to PaddlePaddle, along with 10 large deep-learning models that span natural-language processing, vision, and computational biology. Among the models is a hundred-billion-parameter natural language processing (NLP) model called ERNIE 3.0 Zeus, a geography-and-language pretrained model called ERNIE-GeoL, and a pretrained model for compound representation learning called HELIX-GEM. The company has also created three new industry-focused large models—one for the electric power industry, one for banking, and another one for aerospace—by fine-tuning the company’s ERNIE 3.0 Titan model with industry data and expert knowledge in unsupervised learning tasks. Software frameworks are packages of associated support programs, compilers, code libraries, tool sets, and application programming interfaces (APIs) to enable development of a project or system. Deep-learning frameworks bring together everything needed to design, train, and validate deep neural networks through a high-level programming interface. Without these tools, implementing deep-learning algorithms would take a lot of time because otherwise reusable pieces of code would have to be written from scratch. Baidu started to develop such tools as early as 2012 within months of Geoffrey Hinton’s deep-learning breakthrough at the ImageNet competition. In 2013, a doctoral student at the University of California, Berkeley, created a framework called Caffe, that supported convolutional neural networks used in computer-vision research. Baidu built on Caffe to develop PaddlePaddle, which supported recurrent neural networks in addition to convolutional neural networks, giving it an advantage in the field of NLP. The name PaddlePaddle is derived from PArallel Distributed Deep Learning, a reference to the framework’s ability to train models on multiple GPUs. Google’s open-sourced TensorFlow in 2015 and Baidu open-sourced PaddlePaddle the next year. When Eric Schmidt introduced TensorFlow to China in 2017, it turns out China was ahead of him. While TensorFlow and Meta’s PyTorch, open-sourced in 2017, remain popular in China, PaddlePaddle is more oriented toward industrial users. “We dedicated a lot of effort to reducing the barriers to entry for individuals and companies,” said Ma Yanjun, general manager of the AI Technology Ecosystem at Baidu. PyTorch and TensorFlow require greater deep-learning expertise on the part of users compared to PaddlePaddle, whose toolkits are designed for nonexperts in production environments. “In China, many of the developers are trying to use AI in their work, but they do not have much AI background,” explained Ma. 
“So, to increase the use of AI in different industry sectors, we’ve provided PaddlePaddle with a lot of low-threshold toolkits that are easier to use so it can be used by a wider community.” AI engineers normally don’t know much about industry sectors and industry-sector experts don’t know much about AI. But PaddlePaddle’s easy-to-understand code comes with a wealth of learning materials and tools to help users. It scales easily and has a comprehensive set of APIs to address various needs. These developers used PaddlePaddle for a desert robot to automate the process of tree planting.Baidu It supports large-scale data training and can train hundreds of machines in parallel. It provides a neural-machine translation system, recommender systems, image classification, sentiment analysis, and semantic role labeling. Toolkits and libraries are the strong side of PaddlePaddle, Ma said. For example, PaddleSeg can be used for segmentation of images. PaddleDetection can be used for object detection. “We cover the whole pipeline of AI development from data processing to training, to model compression, to the adaptation to different hardware,” said Ma, “and then how to deploy them in different systems, for example, in Windows or in the Linux operating system or on an Intel chip or on an Nvidia chip.” The platform also hosts toolkits for cutting-edge research purposes, like Paddle Quantum for quantum-computing models and Paddle Graph Learning for graph-learning models. “That’s why PaddlePaddle is quite popular in China right now,” he said. “Developers are using such toolkits and not just the tool itself.” Since it was open-sourced, PaddlePaddle has evolved quickly to have better performance and user experience in different industry sectors outside Baidu as well as countries outside China thanks to extensive English-language documentation. Currently, PaddlePaddle offers over 500 algorithms and pretrained models to facilitate the rapid development of industrial applications. Baidu has worked to reduce model size so they can be deployed in real-world applications. Some of the models are very small and fast and can be deployed on a camera or cellphone. Industrial Applications for PaddlePaddle Transportation companies have been using PaddlePaddle to deploy AI models that monitor traffic lights and improve traffic efficiency. Manufacturing companies are using PaddlePaddle to improve productivity and lower costs. Recycling companies use PaddlePaddle to develop an object-detection models that can identify different types of garbage for a garbage-sorting robot. Shouguang county in Shandong province is deploying AI to monitor the growth of different vegetables, advising farmers the best time to pick and pack them. In Southeast Asia, PaddlePaddle has been used to control AI-powered forest drones for fire prevention. PaddlePaddle has parameter server technology to train sparse models that can be used in real-time recommender systems and search. But it has also merged models into even larger systems that are used for scenarios that don’t require real-time results, like text generation or image generation. Baidu sees large, dense models as another way of reducing the barrier to AI adoption because so-called foundation models can be adapted to specific scenarios. Without the foundation model, you need to develop everything from scratch. Ma said research areas are converging with cross-model learning of different modalities, like speech and vision. 
He said Baidu is also using knowledge graphs in the deep-learning process. “Previously a deep-learning system dealt with raw texts or raw images without any knowledge input and the system used self-supervised learning to gather rules outside the data,” Ma said. “But now we are seeing knowledge graphs as an input.”
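To give a sense of what that "low threshold" looks like in practice, here is a minimal training sketch using PaddlePaddle's high-level paddle.Model API as documented for the 2.x releases. The dataset, network, and hyperparameters are arbitrary choices for illustration, and exact signatures may vary between versions.

```python
# Minimal PaddlePaddle 2.x sketch using the high-level paddle.Model API
# (illustrative; check the current docs, as exact signatures can change).
import paddle
from paddle.vision.transforms import ToTensor

# Built-in MNIST dataset; the transform converts images to tensors.
train_data = paddle.vision.datasets.MNIST(mode="train", transform=ToTensor())
test_data = paddle.vision.datasets.MNIST(mode="test", transform=ToTensor())

# A small fully connected classifier, wrapped by the high-level Model helper.
net = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(784, 512),
    paddle.nn.ReLU(),
    paddle.nn.Linear(512, 10),
)
model = paddle.Model(net)

# prepare() bundles the optimizer, loss, and metrics; fit() runs training.
model.prepare(
    optimizer=paddle.optimizer.Adam(parameters=model.parameters()),
    loss=paddle.nn.CrossEntropyLoss(),
    metrics=paddle.metric.Accuracy(),
)
model.fit(train_data, epochs=2, batch_size=64, verbose=1)
model.evaluate(test_data, verbose=1)
```

Lower-level training loops and the domain toolkits mentioned above, such as PaddleSeg and PaddleDetection, build on this same core framework.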
  • The Radical Scope of Tesla’s Data Hoard
    Aug 03, 2022 02:27 PM PDT
    You won’t see a single Tesla cruising the glamorous beachfront in Beidaihe, China, this summer. Officials banned Elon Musk’s popular electric cars from the resort for two months while it hosts the Communist Party’s annual retreat, presumably fearing what their built-in cameras might capture and feed back to the United States. Back in Florida, Tesla recently faced a negligence lawsuit after two young men died in a fiery car crash while driving a Model S belonging to a father of one of the accident victims. As part of its defense, the company submitted a historical speed analysis showing that the car had been driven with a daily top speed averaging over 90 miles per hour (145 kilometers per hour) in the months before the crash. This information was quietly captured by the car and uploaded to Tesla’s servers. (A jury later found Tesla just 1 percent negligent in the case.) Meanwhile, every recent-model Tesla reportedly records a breadcrumb GPS trail of every trip it makes—and shares it with the company. While this data is supposedly anonymized, experts are skeptical. Alongside its advances in electric propulsion, Tesla’s innovations in data collection, analysis, and usage are transforming the automotive industry, and society itself, in ways that appear genuinely revolutionary. “Gateway log” files—periodically uploaded to Tesla—include seatbelt, Autopilot, and cruise-control settings, and whether drivers had their hands on the steering wheel. In a series of articles (story 2; story 3), IEEE Spectrum is examining exactly what data Tesla vehicles collect, how the company uses them to develop its automated driving systems, and whether owners or the company are in the driver’s seat when it comes to accessing and exploiting that data. There is no evidence that Tesla collects any data beyond what customers agree to in their terms of service—even though opting out of this completely appears to be very difficult. Almost every new production vehicle has a battery of sensors, including cameras and radars, that capture data about their drivers, other road users, and their surroundings. There is now a worldwide connected car-data industry, trading in anonymized vehicle, driver, and location data aggregated from billions of journeys made in tens of millions of vehicles from all the major automotive equipment manufacturers. But none seem to store that information and send it back to the manufacturer as regularly, or in such volume, or have been doing so for as long, as those made by Tesla. “As far as we know, Tesla vehicles collect the most amount of data,” says Francis Hoogendijk, a researcher at the Netherlands Forensic Institute who began investigating Tesla’s data systems after fatal crashes in the United States and the Netherlands in 2016. Spectrum has used expert analyses, NTSB crash investigations, NHTSA reports, and Tesla’s own documents to build up as complete a picture as possible of the data Tesla vehicles collect and what the company does with them. To start with, Teslas, like over 99 percent of new vehicles, have event data recorders (EDRs). These “black box” recorders are triggered by a crash and collect a scant 5 seconds of information, including speed, acceleration, brake use, steering input, and automatic brake and stability controls, to assist in crash investigations. But Tesla also makes a permanent record of these data—and many more—on a 4-gigabyte SD or 8-GB microSD card located in the car’s Media Control Unit (MCU) Linux infotainment computer. 
These time-stamped “gateway log” files also include seatbelt, Autopilot, and cruise-control settings, and whether drivers had their hands on the steering wheel. They are normally recorded at a relatively low resolution, such as 5 hertz, allowing the cards to store months’ or years’ worth of data, even up to the lifetime of the vehicle. Because the gateway logs use data from cars’ standard controller area network (CAN) buses, they can include the unique vehicle identification number, or VIN. However, no evidence suggests these logs could include information from the car’s GPS module, or from its cameras or (for earlier models) radars. [Image: In a Florida court, Tesla presented detailed data about the top speeds of a Model S involved in a fatal crash. Credit: Car Engineering/Tesla/Southern District of Florida U.S. Courts] When an owner connects a Tesla to a Wi-Fi network—for instance, to download an over-the-air update that adds new features or fixes bugs—the gateway log data is periodically uploaded to Tesla. Judging by Tesla’s use of gateway log data in the Florida lawsuit, Tesla appears to link that data to its originating vehicle and store it permanently. (Tesla did not respond to requests for clarification on this and other issues.) Teslas also have a separate Autopilot Linux computer, which takes inputs from the cars’ cameras to handle driver-assistance functions like cruise control, lane-keeping, and collision warnings. If owners plug their own USB thumb drives into the car, they can make live dashcam recordings, and set up Sentry Mode to record the vehicle’s surroundings when parked. These recordings do not appear to be uploaded to Tesla. However, there are many occasions in which Tesla vehicles do store images and (in 2016 models onward) videos from the cameras, and then share them with the company. These Autopilot “snapshots” can span several minutes and consist of up to several hundred megabytes of data, according to one engineer and Tesla owner who has studied Tesla’s data-collection process using salvaged vehicles and components, and who tweets using the pseudonym Green. As well as visual data, the snapshots include high-resolution log data, similar to that captured in the gateway logs but at a much higher frequency—up to 50 Hz for wheel-speed information, notes Hoogendijk. Snapshots are triggered when the vehicle crashes—as detected by the airbag system deploying—or when certain conditions are met. These can include anything that Tesla engineers want to learn about, such as particular driving behaviors, or specific objects or situations being detected by the Autopilot system. (These matters will be covered in the second installment in our series, to be posted tomorrow.) GPS location data is always captured for crash events, says Green, and sometimes for other snapshots. Like gateway data, snapshots are uploaded to Tesla when the car connects to Wi-Fi, although those triggered by crashes will also attempt to upload over the car’s 4G cellular connection. Then, Green says, once a snapshot has been successfully uploaded, it is deleted from the Autopilot’s onboard 32-GB storage. In addition to the snapshots, the Autopilot computer also records a complete trip log every time a mid-2017 or later Tesla is shifted from Park to Drive, says Green. Trip logs include a GPS breadcrumb trail until the car is shifted back into Park and include speeds, road types, and when or whether Autopilot was activated. Green says that trip logs are recorded whether or not Autopilot (or Full Self-Driving) is used.
Like the snapshots, trip logs are deleted from the vehicle after being uploaded to Tesla. But what happens to this treasure trove of data? Tesla has sold about three million vehicles worldwide, the majority of which are phoning home daily. They have provided the company with billions of miles of real-world driving data and GPS tracks, and many millions of photos and videos. What the world’s leading EV automaker is doing with all that data is the subject of our next installment. Update 5 Aug. 2022: Elon Musk announced this week that Tesla has now sold about three million vehicles worldwide (not two as we had originally reported).
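A rough sense of why months or even years of gateway-log data can fit on such a small card comes from a back-of-envelope estimate. The short Python sketch below uses the figures reported in the article (a 4-GB card, roughly 5-hertz logging); the 64-byte record size is an invented assumption, since Tesla’s actual log format is not public.

# Back-of-envelope estimate of how long a Tesla-style "gateway log" could
# accumulate on a small memory card. The 4-GB card size and ~5 Hz sampling
# rate come from the article; the bytes-per-record figure is purely an
# illustrative assumption, not a documented Tesla format.

CARD_BYTES = 4 * 1024**3    # 4-GB SD card (from the article)
SAMPLE_RATE_HZ = 5          # low-resolution logging rate (from the article)
BYTES_PER_RECORD = 64       # assumed size of one time-stamped record (guess)

def days_of_logging(card_bytes: int, rate_hz: float, record_bytes: int) -> float:
    """Return how many days of continuous logging fit on the card."""
    bytes_per_day = rate_hz * record_bytes * 86_400
    return card_bytes / bytes_per_day

if __name__ == "__main__":
    days = days_of_logging(CARD_BYTES, SAMPLE_RATE_HZ, BYTES_PER_RECORD)
    print(f"~{days:,.0f} days (~{days / 365:.1f} years) of continuous logging")

Under these assumptions the card holds on the order of 150 days of nonstop recording; if, as seems likely, logging slows or stops while the car sits parked, real-world coverage would stretch far longer, consistent with the months-to-years figure above.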
  • Solar-to-Jet-Fuel System Readies for Takeoff
    Aug 03, 2022 10:00 AM PDT
    As climate change edges from crisis to emergency, the aviation sector looks set to miss its 2050 goal of net-zero emissions. In the five years preceding the pandemic, the top four U.S. airlines—American, Delta, Southwest, and United—saw a 15 percent increase in the use of jet fuel. Despite continual improvements in engine efficiencies, that number is projected to keep rising. A glimmer of hope, however, comes from solar fuels. For the first time, scientists and engineers at the Swiss Federal Institute of Technology (ETH) in Zurich have reported a successful demonstration of an integrated fuel-production plant for solar kerosene. Using concentrated solar energy, they were able to produce kerosene from water vapor and carbon dioxide extracted directly from air. Fuel thus produced is a drop-in alternative to fossil-derived fuels and can be used with existing storage and distribution infrastructures, and engines. Fuels derived from synthesis gas (or syngas)—an intermediate product that is a specific mixture of carbon monoxide and hydrogen—are a known alternative to conventional, fossil-derived fuels. The syngas is then converted by Fischer-Tropsch (FT) synthesis, in which chemical reactions turn the carbon monoxide and hydrogen into liquid hydrocarbons. The team of researchers at ETH found that a solar-driven thermochemical method to split water and carbon dioxide using a metal oxide redox cycle can produce renewable syngas. They demonstrated the process in a rooftop solar refinery at the ETH Machine Laboratory in 2019. [Image: Reticulated porous structure made of ceria used in the solar reactor to thermochemically split CO2 and H2O and produce syngas, a specific mixture of H2 and CO. Credit: ETH Zurich] The current pilot-scale solar tower plant was set up at the IMDEA Energy Institute in Spain. It scales up the solar reactor of the 2019 experiment by a factor of 10, says Aldo Steinfeld, an engineering professor at ETH who led the study. The fuel plant brings together three subsystems—the solar tower concentrating facility, solar reactor, and gas-to-liquid unit. First, a heliostat field made of mirrors that rotate to follow the sun concentrates solar irradiation onto a reactor mounted on top of the tower. The reactor is a cavity receiver lined with reticulated porous ceramic structures made of ceria (or cerium(IV) oxide). Within the reactor, the concentrated sunlight creates a high-temperature environment of about 1,500 °C, which is hot enough to split the carbon dioxide and water captured from the atmosphere and produce syngas. Finally, the syngas is processed to kerosene in the gas-to-liquid unit. A centralized control room operates the whole system. Fuel produced using this method closes the fuel carbon cycle as it produces only as much carbon dioxide as has gone into its manufacture. “The present pilot fuel plant is still a demonstration facility for research purposes,” says Steinfeld, “but it is a fully integrated plant and uses a solar-tower configuration at a scale that is relevant for industrial implementation.” “The solar reactor produced syngas with selectivity, purity, and quality suitable for FT synthesis,” the authors noted in their paper. They also reported good material stability for multiple consecutive cycles. They observed a value of 4.1 percent solar-to-syngas energy efficiency, which Steinfeld says is a record value for thermochemical fuel production, even though better efficiencies are required to make the technology economically competitive.
[Image: A heliostat field concentrates solar radiation onto a solar reactor mounted on top of the solar tower. The solar reactor co-splits water and carbon dioxide and produces a mixture of molecular hydrogen and carbon monoxide, which in turn is processed into drop-in fuels such as kerosene. Credit: ETH Zurich] “The measured value of energy conversion efficiency was obtained without any implementation of heat recovery,” he says. The heat rejected during the redox cycle of the reactor accounted for more than 50 percent of the solar-energy input. “This fraction can be partially recovered via thermocline heat storage. Thermodynamic analyses indicate that sensible heat recovery could potentially boost the energy efficiency to values exceeding 20 percent.” To do so, more work is needed to optimize the ceramic structures lining the reactor, something the ETH team is actively working on by looking at 3D-printed structures for improved volumetric radiative absorption. “In addition, alternative material compositions, that is, perovskites or aluminates, may yield improved redox capacity, and consequently higher specific fuel output per mass of redox material,” Steinfeld adds. The next challenge for the researchers, he says, is the scale-up of their technology for higher solar-radiative power inputs, possibly using an array of solar cavity-receiver modules on top of the solar tower. To bring solar kerosene into the market, Steinfeld envisages a quota-based system. “Airlines and airports would be required to have a minimum share of sustainable aviation fuels in the total volume of jet fuel that they put in their aircraft,” he says. This is possible as solar kerosene can be mixed with fossil-based kerosene. This would start out small, as little as 1 or 2 percent, which would raise the total fuel costs at first, though minimally—adding “only a few euros to the cost of a typical flight,” as Steinfeld puts it. Meanwhile, rising quotas would lead to investment and to falling costs, with solar kerosene eventually replacing fossil-derived kerosene. “By the time solar jet fuel reaches 10 to 15 percent of the total jet-fuel volume, we ought to see the costs for solar kerosene nearing those of fossil-derived kerosene,” he adds. However, we may not have to wait too long for flights to operate solely on solar fuel. A commercial spin-off of Steinfeld’s laboratory, Synhelion, is working on commissioning the first industrial-scale solar fuel plant in 2023. The company has also collaborated with the airline SWISS to conduct a flight solely using its solar kerosene.
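To make the 4.1 percent figure concrete, here is a minimal Python sketch of the solar-to-syngas energy efficiency calculation as it is conventionally defined: the chemical energy (heating value) stored in the syngas divided by the solar energy delivered to the reactor. The input values below are invented round numbers chosen only to illustrate the arithmetic; they are not measurements from the IMDEA pilot plant.

# Illustrative solar-to-syngas efficiency calculation. The 4.1 percent
# record value quoted in the article corresponds to this kind of ratio;
# the numbers below are made up to reproduce a ~4 percent result.

def solar_to_syngas_efficiency(syngas_energy_mj: float, solar_input_mj: float) -> float:
    """Fraction of the incident solar energy stored chemically in the syngas."""
    return syngas_energy_mj / solar_input_mj

if __name__ == "__main__":
    solar_input_mj = 1000.0   # assumed solar energy delivered to the reactor
    syngas_energy_mj = 41.0   # assumed heating value of the syngas produced
    eta = solar_to_syngas_efficiency(syngas_energy_mj, solar_input_mj)
    print(f"solar-to-syngas efficiency ≈ {eta:.1%}")

Recovering part of the heat now rejected during the redox cycle would reduce the solar input needed per unit of syngas, which is how the team projects efficiencies above 20 percent.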
  • Why Studying Bats Might Yield Insights into Human Life Extension
    Aug 03, 2022 06:12 AM PDT
    Few fields of endeavor have advanced as swiftly as bioinformatics over the past couple of decades. Just 25 years ago, the human genome was still largely a mystery. Then, in 2003, the first sequence, covering about 92 percent of a human genome, was announced. That sequence cost some US $300 million. Over the years, as the technology became more advanced and pervasive, the cost of sequencing declined. Nowadays, it’s possible to get a sequence for well under $1,000. This price drop has triggered a revolution in the ability of doctors to identify a patient’s susceptibility to disease and also to prescribe effective treatments. Once the genome was sequenced, the enormous task of identifying the function of the many genes began. Most estimates of the number of protein-coding genes in the human genome are now in the range of 19,000 to 21,000, although some are considerably higher. And as many as a quarter of these genes remain of largely uncertain function. The most powerful software-based tool for researchers trying to understand the function of these many genes is a system called BLAST, which stands for Basic Local Alignment Search Tool. Here’s how it works. Let’s say a team of research biologists has come across a rhesus monkey gene that they can’t identify. They can enter into BLAST the nucleotide sequence of the DNA or the amino-acid sequence of the protein associated with the gene. BLAST then searches enormous databases to find similar genes within the genomes of countless creatures, including humans. A match to a known gene often enables the researchers to infer the function of the unknown gene. It also lets them infer functional and evolutionary relationships that might exist between the sequences, and locate the unknown gene within one or more families of related genes. First released in 1990, BLAST was created by a group at the U.S. National Institutes of Health that included Eugene Myers, Webb Miller, Stephen Altschul, Warren Gish, and David Lipman. Their 1990 paper describing BLAST has more than 75,000 citations, making it one of the most highly cited research papers of all time. Earlier this year, Myers and Miller received the IEEE Frances E. Allen Medal, which honors “innovative work in computing leading to lasting impact on other aspects of engineering, science, technology, or society.” Shortly before the ceremony, IEEE Spectrum spoke with Myers, who had just retired as a director of the Max Planck Institute of Molecular Cell Biology and Genetics. It’s the mid-to-late 1980s at the U.S. National Institutes of Health. What was in the air? What were some of the motivating factors that led you and your colleagues there to work on and, ultimately, complete BLAST? Eugene Myers: Well, there was already a tool like BLAST for searching the database, but it wasn’t very efficient and it took much too long. And David Lipman, who was running the National Center for Biotechnology Information (NCBI), that growing database, was looking for something faster. And I happened to be on sabbatical. And I was a smoker at the time, and I was downstairs and he brought me this article about this new hot chip that was being promoted by TRW. And I’m sitting there smoking my cigarettes saying, “Oh, David, I don’t believe in ASICs. 
I think if we just write the right code, we can do something.” And I had actually been working on a technique, a theoretical technique, for sublinear search. And I mean, basically, David and I and Webb got together and we had a very quick series of exchanges where we basically took the theoretical idea and distilled it down to its essence. And it was really fun, actually. I mean, Webb and I were passing back and forth versions of code, trying different implementations. And that was it. And I need to say, we got something that was fast as greased lightning at the time. Do you remember what the chip was? Myers: I think it was called the FDF, and it was a systolic-array chip. It was designed for pattern matching primarily for the intelligence agencies. [Editor’s note: the Fast Data Finder (FDF) was an ASIC for recognizing strings of text. It was created at TRW in the mid-1980s.] Ah, intrigue. So that leads us to the next question, which is, for those who aren’t biologists, what exactly does BLAST do? It’s been called a sort of a search engine for genes. So a biologist who is doing a sequence, say, of a genome has a piece of genetic material that’s presumably a gene and doesn’t know what this gene does. Myers: Well, I mean, basically, BLAST takes a DNA sequence or protein sequence, which is just a code over some alphabet, and it goes off and it searches the database looking for something that looks like that sequence. In biology, sequences aren’t preserved exactly. So you’re not looking for exactly the same sequence. You’re looking for something that’s like it. A few of the symbols can be different, maybe one can be missing, there could be an extra one. So it’s called approximate match. And when you say it goes off and finds them, it finds them from a catalog of the genomes and genetic material of all living creatures that have been recorded. Myers: Yes. The database is oftentimes preprocessed to accelerate the search, although the initial BLAST, basically, just streamed the entire database. So it will find a close-as-possible match for whatever the sequences you have, which may be a gene, and it will find it and it might be a totally different creature… Myers: It could potentially find many of them. And one of the important things about BLAST, actually, which Altschul contributed, was it actually gave you the probability that you would see that match by chance. Because one of the big problems prior to that is that people were taking things that they thought kind of looked the same and saying, “Well, here’s an interesting match,” when in fact, according to probability theory, that was not an interesting match at all. So one of the very nice things about BLAST is it gave you a P-value that told you whether or not your match was actually interesting or not. But it would actually give you a whole list of matches and rank them according to their probability. So one of the things that this illustrates is that all of us creatures on Earth, all of us, we’re made up of genes, and not only are we made up of genes, but you see throughout all of the living creatures very similar genes. So the blueprint, if you will, the elements of the blueprint that make up a human are different, but remarkably similar to the ones that, say, make up a parakeet or a lizard. Myers: Now, there was a huge diaspora of life about 500 million years ago from bacteria into multicellular creatures where we basically ended up with fish and insects and all of the more complex orders of life. 
And they, basically, all used the same genes or proteins, but they used them in different ways. And mostly what was going on was the way that those genes were being turned on and the way those cassettes were being run. I mean, for example, a fruit fly has 14,000 genes and a human being has, I don’t know, maybe 28,000. And basically, every gene that’s in the fruit fly, there’s an analog that’s in a human being. Human beings have more copies of particular genes. They have one or two of something instead of just one of them. And human beings have a lot more genes that turn things on and off selectively. In other words, that regulate how the genes are being used. But the actual repertoire of genes is very similar. When we sequenced the human genome back at the turn of the century, 2000, we looked at the fruit fly and we looked at a human, and we said, “Hey, the fruit fly is like a little human.” I mean, potentially it gets cancer, metabolic disorders. It’s really quite fascinating. There are some very large-scale projects around the world now aimed at sequencing the genomes of enormous classes of creatures, such as vertebrates or plants or all living things native to the British Isles. These initiatives are sometimes collectively referred to as “sequencing the world.” Why are these efforts important? Myers: Well, that’s a complex question. The basic answer is that we’re starting to do it now because we can finally do it at a quality where we feel like these libraries of sequences that we produce are going to, basically, stand the test of time—that they’re sufficiently correct and accurate. And the fascinating thing is, we’re going to learn more about how the various genes function. See, there’s still a lot of questions about what these genes are doing. And we’re going to learn more about how they function by looking at how they’re working across all of life than by looking at a particular species. I mean, right now, most medicine is just focused on human beings. For example, we’re interested in how long a human being lives. We’d like to live longer. But absent disease, the variation in the longevity of human beings is about 10 percent. I mean, some of us expire at 85, some of us at 95, and some of us at 75. It’s not a very big range. But for example, there are bats that as a function of their body weight live 50 times longer than they’re supposed to. Fifty times. That’s like living to 5,000 for a human being. And there are other bats that are very closely related to that bat—only 5 million years of evolution between them—where the bat lives a normal life. So if you go out into nature, you’re going to see these extremes in physical characteristics of what we call phenotype. So what we are interested in is what’s the relationship between the genotype, which is the gene sequence, and all the genes that are in it, and the phenotype, which is the physical characterization or manifestation of the creature. So in other words, one of the things you want to do is you want to know what the cluster of genes is that enables certain bats to live 50 times longer than other bats? Myers: Yes. So we think that by sequencing lots of pairs of bats that are short- and long-lived and comparing their genomes, we’re going to get real clues about what it takes to have a creature live a long time. And presumably, because the genes in a human being are so similar to those in the bat, it will translate to human medicine. There is a study of so-called supercentenarians among human beings, if I’m not mistaken. 
So this would presumably provide additional depth and information beyond just studying supercentenarians. Supercentenarians are people who live to be about 100 without substantial decline, either mentally or physically. Myers: A lot of that is about lifestyle. I mean, they’ve done studies, the Blue Zones. And it’s about having good friends, it’s about eating a healthy diet, not eating too much, getting a little exercise, not too much stress. A lot of these things, I mean, turn out to be very significant factors. But again, there’s basically a kind of an expire-by date for every species of creature, and they have a longevity. Because the original purpose, really, of a creature is to create children. And once you’ve created the children, your job’s done. I mean, once you’ve created offspring, you’ve propagated the genome and you’re superfluous. We’ve got this natural built-in expiry date. And the question is, how can we fundamentally change that? Because I don’t want to live to be 100. I want to live to be 1,000, okay? I mean, it’s too late for me. But think about it. If I could live to be 1,000, I could have 10 careers. I mean, I’d love to do 100 years as an architect, 100 years as a physician. Right? So the idea is if you could identify the genes and the sequences that these long-lived creatures have in common, not only humans, but other creatures, you could, in theory, use a gene-editing technique, something that follows from CRISPR in the far future, to actually edit genes? This is probably decades from now. Myers: Well, it could be just as simple as stopping certain reactions from happening. So it may not even be as much as a [genome] edit. I mean, it may just be like a drug where basically we just inhibit certain pathways. We build a small molecule that inhibits something to stop it from doing its thing, and that turns off the expiry clock. But we don’t know exactly how to do that yet. I mean, we know that reducing inflammation certainly leads to longer life. We know that not eating as much. So maybe there’s a drug that we can take that helps us metabolize better so that we don’t—so there are a lot of options like that. It doesn’t necessarily have to be gene editing. This is a kind of a futuristic thing. I can’t tell you when, but I can tell you that as long as we don’t blow ourselves up to kingdom come or ruin our planet and we have enough time, we will do it. We will do it. One of the main motivations, perhaps the greatest motivation for all of this work, is to better understand how specific genetic variations lead to disease. It’s a lot of what keeps the money flowing and the whole enterprise going. And a very powerful tool for this purpose is the genome-wide association study. And this predates a lot of this technology. It’s an older tool, but it is one that is as dependent as ever on bioinformatics. And I would think because of the growing complexity, only getting more dependent. Myers: A lot of what we’ve been trying to do for the last couple of decades is basically correlative. In other words, we’re not looking actually for causation. We’re just simply looking for correlation. This gene seems to have something to do with this disease and vice versa. And it doesn’t give us a mechanism, but it does tell us that this is associated with that. So we want to understand. A lot of what we’ve been doing is sequencing lots and lots of people. 
In other words, getting their genotype, their genome, and correlating that with their phenotype, with their physical characteristics. Do they get heart disease early? Do they get diabetes? A classic one is breast cancer with the BRCA. Myers: Right. And that was an example where we found basically the genes that are absolutely correlated with breast cancer. I mean, we know there’s a fairly small repertoire. But on the other hand, something like coronary health, heart health, is very, very complicated because really it’s a function of hundreds of genes. And so which combination and which battery? So basically, it’s not a single locus. I mean, early on, in the very early days, there were a lot of diseases that were caused by single mutation, but those are kind of the exception rather than the rule. I mean, those single mutations, they were incredibly serious diseases. And it’s nice that—well, I think we’re in a position to affect some of those. It’s very interesting to have these single-locus diseases in hand to really improve the health of humanity as a whole. We’re going to need to have a kind of more refined understanding of the relationship between the genotype and the phenotype. And so these studies have been going on and people have been collecting data. In fact, the biggest problem, actually, isn’t getting everybody’s genome. The biggest problem is getting accurate phenotypic data. In other words, actually getting accurate measurements of people’s blood sugar. Like, when do you take the test, etc. I won’t go into all the complexities. But it’s actually building a database of all of the characteristics of people and basically digitizing all of the information we have about people. But this is going forward, and I think it will be very useful. One of the more sensational applications of bioinformatics is the challenge of reviving extinct species. So we read about the woolly mammoth, and there’s recent talk about the dodo and others. There’s the quagga, I think. There’s just a whole host of creatures that have, sadly, departed from the earth, but that in theory, we could revive in some form with the techniques and tools now available. Myers: I think probably what’s more interesting is not actually bringing them back, but understanding what they were. For example, Svante Pääbo’s work reconstructing the Neanderthal sequence. Okay. I mean, it turns out that we’re all about, I think, 4 percent Neanderthal DNA. And it turns out, for example with COVID, it turns out that your propensity for outcomes in COVID actually is correlated with whether or not you had some of this Neanderthal DNA. I think it’s quite fascinating that we’re kind of an admixture of these things. So knowing this ancient genome is quite interesting. I mean, also, the woolly mammoth versus the modern-day elephant basically gives us another clue. And I think what’s fascinating is the fact that we can do it at all. If we can get sufficient DNA material, then we can extract these things. Understanding that the evolutionary history of mankind is certainly of interest because we’re interested in ourselves, yes? For other creatures, well, it is the case that if we have a sequence, I do believe that we will eventually be able to kind of realize Jurassic Park and actually literally create the genomic sequence, transplant it into a nonfertilized embryo of a nearby species, and create the creature, an instance of the creature. And I think that will be pretty cool if we really want to understand dodo birds. 
But I think in general, we don’t want to lose all of that diversity. That connects back to what we were talking about before, which are these projects to go out and sequence the world. For example, I’ve sequenced some nearly extinct turtles. Now that I have the sequence of those turtles, even if they go extinct, we can still do a Jurassic Park sometime in the future, but at least the genetic inheritance of those species is still present and we will still have it. So it’s, basically, a matter of conservation and a matter of understanding evolution and it’s pretty damn cool. [Image: The last thylacine died at the Beaumaris Zoo in Hobart, Australia, on 7 September 1936. Recently, biologists at the University of Melbourne launched a project aimed at bringing the creature back from extinction. Credit: Hum Historical/Alamy] And for full disclosure, we should point out that nobody could actually do Jurassic Park because dinosaur DNA is tens of millions of years old, so it no longer exists. Myers: Yeah. I don’t mean Jurassic Park in the sense of bringing back dinosaurs. Jurassic Park in the sense of creating creatures that are no longer extant. Okay? I mean, that’s always the case with the best science fiction, is that it’s plausible. Jurassic Park is plausible; so is Gattaca. You know that one with Ethan Hawke where they, basically, sequenced everybody and they took the best? I mean, that is completely plausible. What do you think are some of the most exciting challenges for young people, that they’ll be working on, say, in two years or four years? The big, difficult problems in bioinformatics. Myers: Well, there are a lot of problems that still haven’t been solved. For example, how do you get a given shape and form from a genome? The genome actually encodes everything. It gives you five fingers. It gives you a nose, eyes. It encodes for everything. But we don’t understand the biophysical process for that. I mean, we have some idea that this gene controls that and this gene controls that, but that doesn’t tell us mechanistically what’s happening, and it doesn’t tell us how to intervene or what would happen if we intervene. So I still think that the fundamental question is to try to understand kind of what’s encoded in a genome and what mechanistically does it unfold. And I mean, computational biology is going to be at the core of it because, I mean, you’re talking about, okay, for a human being, 30,000 genes. Those 30,000 genes probably get transcribed into 150,000 different protein variants. There are probably 10 billion of those proteins floating around an individual cell. And then your body—I mean, your brain alone has 10 billion neurons. So think about the scale of that thing. Okay? I mean, we’re not even close. So I think that high-performance computing. I think that advanced simulations. A lot of what moves biology is technology, the ways to manipulate things. We’ve been able to manipulate creatures for a long time genetically. But now that we have this new mechanism, CRISPR-Cas, for which the Nobel was awarded a couple of years ago, I mean, we can now do that with precision and fidelity, which is a huge advance.
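For readers who want to try the kind of search Myers describes, the sketch below shows one common way to submit a query to NCBI’s BLAST service with the Biopython library and to read back the best hits along with their E-values, the statistic that plays the role of the match probability Myers mentions. It assumes Biopython is installed and that NCBI’s servers are reachable; the nucleotide fragment is made up purely for illustration.

# A minimal sketch of a remote BLAST search using Biopython. The query
# sequence below is an invented fragment, not a real gene.

from Bio.Blast import NCBIWWW, NCBIXML

QUERY = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # made-up DNA fragment

def top_hits(sequence: str, n: int = 5):
    """Run blastn against the 'nt' database and return the n best hits with E-values."""
    handle = NCBIWWW.qblast("blastn", "nt", sequence)    # remote search at NCBI
    record = NCBIXML.read(handle)
    hits = []
    for alignment in record.alignments[:n]:
        best_hsp = alignment.hsps[0]                     # highest-scoring segment pair
        hits.append((alignment.title, best_hsp.expect))  # expect = E-value
    return hits

if __name__ == "__main__":
    for title, e_value in top_hits(QUERY):
        print(f"E={e_value:.2e}  {title[:80]}")

The lower the E-value, the less likely the match arose by chance. Remote queries can take a minute or more to return, so large-scale work is usually run against a local BLAST+ installation instead.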
  • Proposed Amendment to the IEEE Constitution on the Ballot
    Aug 02, 2022 11:00 AM PDT
    At its November 2021 meeting, the IEEE Board of Directors voted to propose an amendment to the IEEE Constitution, Article XIV—Amendments. The proposed amendment seeks to ensure that any member-initiated petition proposal to amend the IEEE Constitution receives participation from all geographic regions. Supporting and Opposing Statements Members have been discussing and sharing their views on the proposed amendment on the IEEE Collabratec discussion forum. To ensure a collaborative environment, the forum is moderated by the IEEE Election Oversight Committee. These two statements, along with a link to the discussion forum, will accompany the proposed amendment on the IEEE Annual Election Ballot. In Support Statement: The proposed amendment reflects changes in IEEE membership and in the emergence of global electronic communications. In 1963 IEEE had 150,000 members, 93 percent of whom were in today’s Regions 1-6 in the U.S., and electronic communications were almost non-existent. Today IEEE has over 400,000 members, with approximately 1/3 in Regions 1-6, 1/3 in Region 10, and 1/3 in Regions 7, 8, and 9, and global, personal electronic communication is instant and ubiquitous. Given the evolution of global membership distribution and the enablement of electronic communications and IEEE electronic petitions, IEEE is today able to ensure equity across all Regions and encourage deeper membership engagement through the requirement that every Region have a voice in proposed constitutional changes, without the great burden of collecting paper signatures. The proposed amendment ensures that future proposed constitutional changes reflect global member interests in a global organization, encourages professional collaboration across regional boundaries, and results in a more balanced, equitable, and scalable IEEE that will keep pace with change, both in global membership and relevant technologies, and continue to support the careers and technical lives of our members. In Opposition Statement: The IEEE Board of Directors (BoD) is correct to assert that “IEEE’s Constitution is IEEE’s highest-level governance document that affects all Members and rarely should change.” As such, no change should be made unless substantial evidence is provided that the Constitution has a critical flaw that needs to be fixed. Such evidence was not provided, and so the proposed amendment should be rejected. No evidence was provided that the current Constitution is resulting in a large number of member-initiated petitions. If such petitions are currently rare, then the proposed amendment to make the requirements much more onerous will make it impossible for a member to bring an amendment to the ballot. The proposed amendment will require collecting over 3,000 signatures, which is nearly impossible given the time required to locate, contact, communicate with, and discuss with so many members. The proposed 0.333 percent requirement per region will make member-initiated petitions impossible, given the broad geographical locations and the wide variety of languages spanned in the countries covered in the regions. To place an amendment on the ballot would require such an extensive investment in time and finances that only the very rich would have a chance of bringing an amendment to the ballot. Members can participate in the forum until it closes at 11:59 p.m. EDT on 14 August. It will become accessible to members in “read only” mode when balloting for the 2022 IEEE Annual Election begins on 15 August. 
More Information on the 2022 Election To adopt this amendment, an affirmative vote of at least two-thirds of all ballots cast is required, provided that the total number of those voting is at least 10 percent of all IEEE members who are eligible to vote on record as of 30 June 2022. For more details on the proposed amendment, the process for proposed amendment petitions, and election deadlines, visit the IEEE elections website.
  • Wall-Climbing Robot Shelves to Keep You Organized
    Aug 02, 2022 10:30 AM PDT
    I don’t know about you, but being stuck at home during the pandemic made me realize two things. Thing the first: My entire life is disorganized. And thing the second: Organizing my life, and then keeping organized, is a pain in the butt. This is especially true for those of us stuck in apartments that are a bit smaller than we’d like them to be. With space at a premium, Mengni Zhang, a Ph.D. student at Cornell’s Architectural Robotics Lab, looked beyond floor space. Zhang wants to take advantage of wall space—even if it’s not easily reachable—using a small swarm of robot shelves that offer semiautonomous storage on demand. Zhang explains: “During the pandemic I saw an increased number of articles advising people to clean up and declutter at home. We know the health benefits of maintaining an organized lifestyle, yet I could not find many empirical studies on understanding organizational behaviors, or examples of domestic robotic organizers for older adults or users with mobility impairments. There are already many assistive technologies, but most are floor based, which may not work so well for people living in small urban apartments. So, I tried to focus more on indoor wall-climbing robots, sort of like Roomba but on the wall. The main goal was to quickly build a series of functional prototypes (here I call them SORT, which stands for ‘Self-Organizing Robot Team’) to conduct user studies to understand different people’s preferences and perceptions toward this organizer concept. By helping people declutter and rearrange personal items on walls and delivering them to users as needed or wanted while providing some ambient interactions, I’m hoping to use these robots to improve quality of life and enhance our home environments.” This idea of intelligent architecture is a compelling one, I think—it’s sort of like the Internet of Things, except with an actuated physical embodiment that makes it more useful. Personally, I like to imagine hanging a coat on one of these little dudes and having it whisked up out of the way, or maybe they could even handle my bike, if enough of them work together. As Zhang points out, this concept could be especially useful for folks with disabilities who need additional workspace flexibility. Besides just object handling, it’s easy to imagine these little robots operating as displays, as artwork, as sun-chasing planters, lights, speakers, or anything else. It’s just a basic proof of concept at the moment, and one that does require a fair amount of infrastructure to function in its current incarnation (namely, ferrous walls), but I certainly appreciate the researcher’s optimism in suggesting that “wall-climbing robots like the ones we present might become a next ‘killer app’ in robotics, providing assistance and improving life quality.”
  • What V2G Tells Us About EVs and the Grid
    Aug 01, 2022 11:57 AM PDT
    As the number of electric vehicles on the world’s roads explodes, electric utilities are grappling with increasing demand while simultaneously having to stabilize their grids where more intermittent renewable-energy sources like wind and solar are coming online. For utilities looking for ways to store power for later use, all those shiny new EVs might look like rolling batteries that they can not only charge but also draw power from when demand exceeds supply. That’s the promise of vehicle-to-grid (V2G) technology. While China accounted for about half the 6.75 million EVs sold worldwide in 2021, according to Sweden-based analysts EV Volumes, Europe also showed strong growth. There, EVs’ share of the overall automobile market rose from 10 percent in 2020 to 17 percent in 2021, with 2.3 million sold. And it is in Europe that we find one of the largest V2G deployments. Longtime IEEE Spectrum contributor Michael Dumiak, who is based in Germany, ventured over to Utrecht in the Netherlands to report on the city’s ambitious V2G project. To meet the Dutch government’s mandate for all new cars to be zero-emissions by 2030, municipalities like Utrecht as well as utilities and private-sector partners will have to work together to locate new bidirectional charging stations that won’t overload transformers. When discussing the scope of change in the proverbial pipeline, renewables plus EVs plus grids in need of upgrades to handle the millions of new EVs projected to hit Europe’s roads in the next few years, one Dutch researcher told Dumiak, “Our grid was not designed for this.” Nor was grid-scale storage top of mind among the pioneers of V2G, at least not at the beginning. In the online-only feature story (coming soon), technology historian Matthew Eisler of the University of Strathclyde in Glasgow points out that V2G technology was originally conceived as vehicle-to-home, not vehicle-to-grid. Eisler’s piece charts the history of V2G and tells the story of the California company AC Propulsion. He documents how engineers Wally Rippel, Alan Cocconi, and Paul Carosa founded AC Propulsion in the early 1990s and produced a two-seater sports car called the Tzero, which featured bidirectional charging capability. As Eisler points out, this feature had been implemented to give drivers the ability, in an emergency, to charge another EV. Now that ability is being extended to the grid, raising a host of new questions. “And how will batteries with chemistries designed for the EV duty cycle perform in stationary power applications? Will V2G degrade such batteries and reduce their value in transportation? Those questions are far from resolved and yet are key to the success of bidirectional vehicle power,” Eisler writes, adding that many carmakers don’t yet have sufficient incentive to equip their cars for bidirectional power. And even if the auto industry does eventually jump on the bidirectional bandwagon, Eisler says that “it is not yet clear whether batteries designed for the duty cycle of the electric vehicle will prove suitable” for grid storage. The tensions that exist at the interface of these two massive sectors–power and transportation–threaten to hobble the EV market, as Spectrum contributing editor and renowned risk analyst Robert N. Charette noted when I talked to him recently about a series of articles he’s working on that focuses on the risks inherent in mass vehicular electrification. 
While engineers are acutely aware of the enormous impediments blocking the road to a cleaner EV future, Charette believes that politicians and government officials have become detached from the expensive realities involved in retooling the electric-power infrastructure to accommodate tens of millions of new EVs. We hope policymakers take heed of Charette’s warnings about the difficulties ahead, which will appear online starting next month.
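The basic charge-or-discharge decision behind V2G can be stated in a few lines of code. The Python sketch below is a toy illustration of that logic only; the pack size, charger rating, and reserve level are invented assumptions, not parameters of the Utrecht project or of any production system.

# Toy vehicle-to-grid dispatch rule: charge the EV when the grid has surplus
# power, export from the EV when demand exceeds supply, and never dip below
# the driver's reserved state of charge. All numbers are illustrative.

from dataclasses import dataclass

@dataclass
class EVBattery:
    capacity_kwh: float = 60.0   # assumed pack size
    soc: float = 0.70            # state of charge, 0..1
    min_reserve: float = 0.40    # charge reserved for driving, never exported
    max_power_kw: float = 7.0    # assumed bidirectional charger rating

def v2g_setpoint(battery: EVBattery, grid_demand_mw: float, grid_supply_mw: float) -> float:
    """Charger power in kW: positive charges the car, negative exports to the grid."""
    if grid_supply_mw > grid_demand_mw and battery.soc < 1.0:
        return battery.max_power_kw      # absorb surplus
    if grid_demand_mw > grid_supply_mw and battery.soc > battery.min_reserve:
        return -battery.max_power_kw     # send power back to the grid
    return 0.0                           # otherwise sit idle

if __name__ == "__main__":
    car = EVBattery()
    print(v2g_setpoint(car, grid_demand_mw=1020, grid_supply_mw=1000))  # -7.0 (export)
    print(v2g_setpoint(car, grid_demand_mw=950, grid_supply_mw=1000))   #  7.0 (charge)

Real deployments layer on far more: battery-degradation limits, market price signals, driver schedules, and the transformer constraints the Utrecht planners are working around.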
  • Navigating the Great Resignation and Changing Client Demands
    Aug 01, 2022 07:47 AM PDT
    With IP law firms under increasing pressure to meet client expectations faster and more efficiently, many practices are turning to creative workflow solutions and new staffing models. Register now for this free webinar. Join us for the upcoming webinar, Driving IP Law Firm Growth Amidst Staffing and Market Challenges, as our in-house experts, with a combined 40+ years of industry knowledge, share key learnings from the experiences of our IP law firm customers, including the considerations for getting it right and ensuring quality outcomes. Topics that will be covered: finding a right-fit resourcing balance (when to outsource vs. keeping in-house); common pitfalls to avoid; and key learnings and strategies for firms to manage resourcing successfully. Register now for this free webinar.
  • The Unsung Inventor Who Chased the LED Rainbow
    Jul 31, 2022 08:00 AM PDT
    Walk through half a football field’s worth of low partitions, filing cabinets, and desks. Note the curved mirrors hanging from the ceiling, the better to view the maze of engineers, technicians, and support staff of the development laboratory. Shrug when you spot the plastic taped over a few of the mirrors to obstruct that view. Go to the heart of this labyrinth and there find M. George Craford, R&D manager for the optoelectronics division of Hewlett-Packard Co., San Jose, Calif. Sitting in his shirtsleeves at an industrial beige metal desk piled with papers, amid dented bookcases, gym bag in the corner, he does not look like anybody’s definition of a star engineer. Appearances are deceiving. This article was first published as “M. George Craford.” It appeared in the February 1995 issue of IEEE Spectrum. A PDF version is available on IEEE Xplore. The photographs appeared in the original print version. “Take a look around during the next few days,” advised Nick Holonyak Jr., the John Bardeen professor of electrical and computer engineering and physics at the University of Illinois, Urbana, and the creator of the first LEDs. “Every yellow light-emitting diode you see—that’s George’s work.” Holonyak sees Craford as an iceberg—showing a small tip but leaving an amazing breadth and depth unseen. Indeed, Craford does prove to be full of surprises—the gym bag, for example. He skips lunch for workouts in HP’s basement gym, he said, to get in shape for his next adventure, whatever that might be. His latest was climbing the Grand Teton; others have ranged from parachute jumping to whitewater canoeing. His biggest adventure, though, has been some 30 years of research into light-emitting diodes. The call of space When Craford began his education for a technical career, in the 1950s, LEDs had yet to be invented. It was the adventure of outer space that called to him. The Iowa farm boy was introduced to science by Illa Podendorf, an author of children’s science books and a family friend who kept the young Craford supplied with texts that suited his interests. He dabbled in astronomy and became a member of the American Association of Variable Star Observers. He built rockets. He performed chemistry experiments, one time, he recalls with glee, generating an explosion that cracked a window in his home laboratory. When the time came, in 1957, to pick a college and a major, he decided to pursue space science, and selected the University of Iowa, in Iowa City, because space pioneer James Van Allen was a physics professor there.
Vital statistics
Name: Magnus George Craford
Date of birth: Dec. 29, 1938
Birthplace: Sioux City, Iowa
Height: 185 cm
Family: Wife, Carol; two adult sons, David and Stephen
Education: BA in physics, University of Iowa, 1961; MS and PhD in physics, University of Illinois, 1963 and 1967
First job: Weeding soybean fields
First electronics job: Analyzing satellite data from space
Patents: About 10
People most respected: Explorer and adventurer Sir Richard Burton, photographer Galen Rowell, Nobel Prize winner John Bardeen, LED pioneer Nick Holonyak Jr.
Most recent book read: The Charm School
Favorite book: Day of the Jackal
Favorite periodicals: Scientific American, Sports Illustrated, National Geographic, Business Week
Favorite music: String quartets
Favorite composers: Mozart, Beethoven
Computer: “I don’t use one”
Favorite TV show: “NYPD Blue”
Favorite food: Thai, Chinese
Favorite restaurant: Dining room at San Francisco’s Ritz Carlton Hotel
Favorite movies: Bridge on the River Kwai, Butch Cassidy and the Sundance Kid, The Lion in Winter
Leisure activity: Hiking, walking, snow skiing, bicycling, tennis, and, most recently, technical mountain climbing
Car: Sable Wagon (a company car)
Pet peeves: “People that work for me who don’t come to me with little problems, which fester and turn into big ones.”
Organizational membership: IEEE, Society for Information Display
Favorite awards: National Academy of Engineering, IEEE Fellow, IEEE Morris N. Liebmann Memorial Award; but “everything you do is a team thing, so I have mixed feelings about awards.”
As the space race heated up, Craford’s interest in space science waned, in spite of a summer job analyzing data returned from the first satellites. He had learned a bit about semiconductors, an emerging field, and Van Allen pointed him toward the solid-state physics program at the University of Illinois, where Craford studied first for a master’s degree, then a PhD. The glowing Dewar For his doctoral thesis, Craford began investigating tunneling effects in Josephson junctions. He had invested several years in that research when Holonyak, a pioneer in visible lasers and light-emitting diodes, left his position at General Electric Co. and joined the Illinois faculty. Craford met him at a seminar, where Holonyak was explaining his work in LEDs. Recalled Craford: “He had a little LED—just a red speck—and he plunged it into a Dewar of liquid nitrogen, and it lit up the whole flask with a bright red light.” Entranced, Craford immediately spoke to his thesis adviser about switching, a fairly unusual proposal, since it involved dropping years of work. “My thesis adviser was good about it; he had been spending less time around the lab lately, and Holonyak was building up a group, so he was willing to take me on.” Craford believes he persuaded the laser pioneer to accept him; the senior man recalls things differently. Craford’s adviser “was running for U.S. Congress,” Holonyak said, “and he told me, ‘I’ve got this good student, but I’m busy with politics, and everything we do someone publishes ahead of me. I can’t take good care of him. I’d like you to pick him up.’” However it happened, Craford’s career path was finally set—and the lure of the glowing red Dewar never dimmed. Holonyak was growing gallium arsenide phosphide and using it successfully to get bright LEDs and lasers. He assigned his new advisee the job of borrowing some high-pressure equipment for experiments with the material. After finding a professor with a pressure chamber he was willing to lend, Craford set up work in the basement of the materials research building. He would carry GaAsP samples from the lab to the materials research basement, cool them in liquid nitrogen, increase the pressure to study the variation of resistivity, and see unexpected effects. “Just cooling some samples would cause the resistance to go up several times. But add pressure, and they would go up several orders of magnitude,” Craford said. 
“We couldn’t figure out why.” Eventually, Craford and a co-worker, Greg Stillman, determined that variations in resistance were related not only to pressure but also to light shining on the samples. “When you cooled a sample and then shone the light on it, the resistance went down—way down—and stayed that way for hours or days as long as the sample was kept at low temperature, an effect called persistent photoconductivity.” Further research showed that it occurred in samples doped with sulfur but not tellurium. Craford and Stillman each had enough material for a thesis and for a paper published in the Physical Review. The phenomenon seemed to have little practical use, and Craford put it out of his mind, until several years later when researchers at Bell Laboratories found it in gallium aluminum arsenide. “Bell Labs called it the DX Center, which was catchy, studied it intensively, and over time, many papers have been published on it by various groups,” Craford said. Holonyak’s group’s contribution was largely forgotten. “He doesn’t promote himself,” Holonyak said of Craford, “and sometimes this troubles me about George; I’d like to get him to be more forward about the fact that he has done something.” Move to Monsanto After receiving his PhD, Craford had several job offers. The most interesting were from Bell Laboratories and the Monsanto Co. Both were working on LEDs, but Monsanto researchers were focusing on gallium arsenide phosphide, Bell researchers on gallium phosphide. Monsanto’s research operation was less well known than Bell Labs’, and taking the Monsanto job seemed to be a bit of a risk. But Craford, like his hero—adventurer Richard Burton, who spent years seeking the source of the Nile—has little resistance to choosing the less well-trodden path. Besides, “Gallium phosphide just didn’t seem right,” said Craford, “but who knew?” In his early days at Monsanto, Craford experimented with both lasers and LEDs. He focused on LEDs full time when it became clear that the defects he and his group were encountering in growing GaAsP on GaAs substrates would not permit fabrication of competitive lasers. The breakthrough that allowed Craford and his team to go beyond Holonyak’s red LEDs to create very bright orange, yellow, and green LEDs was prompted, ironically, by Bell Labs. A Bell researcher who gave a seminar at Monsanto mentioned the use of nitrogen doping to make indirect semiconductors act more like direct ones. Direct semiconductors are usually better than indirect for LEDs, Craford explained, but the indirect type still has to be used because its band gaps are wide enough to give off light in the green, yellow, and orange part of the spectrum. The Bell researcher indicated that the labs had had considerable success with Zn-O doping of gallium phosphide and some success with nitrogen doping of gallium phosphide. Bell Labs, however, had published early experimental work suggesting that nitrogen did not improve GaAsP LEDs. Nonetheless, Craford believed in the promise of nitrogen doping rather than the published results. “We decided that we could grow better crystal and the experiment would work for us,” he said. A small team of people at Monsanto did make it work. 
Today, some 25 years later, these nitrogen-doped GaAsP LEDs still form a significant proportion—some 5-10 billion—of the 20-30 billion LEDs sold annually in the world today. Again, Holonyak complains, Craford didn’t toot his own horn. “When George published the work, he put the names of the guys he had growing crystals and putting the things together ahead of his name.” His peers, however, have recognized Craford as the creative force behind yellow LEDs, and he was recently made a member of the National Academy of Engineering to honor this work. Craford recalls that the new palette of LED colors took some time to catch on. “The initial reaction,” he said, “was, ‘Wow, that’s great, but our customers are very happy with red LEDs. Who needs other colors?’” Westward ho! After the LED work was published, a Monsanto reorganization bumped Craford up from the lab bench to manager of advanced technology. One of his first tasks was to select researchers to be laid off. He recalls this as one of the toughest jobs of his life, but subsequently found that he liked management. “You have more variety; you have more things that you are semi-competent in, though you pay the price of becoming a lot less competent in any one thing,” he told IEEE Spectrum. Soon, in 1974, he was bumped up again to technology director, and moved from Monsanto’s corporate headquarters in St. Louis to its electronics division headquarters in Palo Alto, Calif. Craford was responsible for research groups developing technology for three divisions in Palo Alto, St. Louis, and St. Peters, Mo. One dealt with compound semiconductors, another with LEDs, and the third with silicon materials. He held the post until 1979. Even as a manager, he remained a “scientist to the teeth,” said David Russell, Monsanto’s director of marketing during Craford’s tenure as technology director. “He is a pure intellectual scientist to a fault for an old peddler like me.” Though always the scientist, Craford also has a reputation for relating well to people. “George is able to express complicated technical issues in a way that all of us can understand,” said James Leising, product development manager for HP’s optoelectronics division. Leising recalled that when he was production engineering manager, a position that occasionally put him in conflict with the research group, “George and I were always able to work out the conflicts and walk away friends. That wasn’t always the case with others in his position.” One time in particular, Leising recalled, Craford convinced the production group of the need for precise control of its processes by graphically demonstrating the intricacies of the way semiconductor crystals fit upon one another. As an executive, Craford takes credit for no individual achievements at Monsanto during that time, but said, “I was proud of the fact that, somehow, we managed to be worldwide competitors in all our businesses.” Even so, Monsanto decided to sell off its optoelectronics business and offered Craford a job back in St. Louis, where he would have been in charge of research and development in the company’s silicon business. Craford thought about this offer long and hard. He liked Monsanto; he had a challenging and important job, complete with a big office, oak furniture, a private conference room, and a full-time administrative assistant. But moving back to St. 
Louis would end his romance with those tiny semiconductor lights that could make a Dewar glow, and when the time came, he just couldn’t do it. Instead, he did the Silicon Valley walk: across the street to the nearest competitor, in this case, Hewlett-Packard Co. The only job it could find that would let him work with LEDs was a big step down from technology director—a position as R&D section manager, directing fewer than 20 people. This meant a cut in salary and perks, but Craford took it. The culture was different, to say the least. No more fancy office and private conference room; at HP Craford gets only “a cubby, a tin desk, and a tin chair.” And, he told Spectrum, “I love it.” He found the HP culture to be less political than Monsanto’s, and believes that the lack of closed offices promotes collaboration. At HP, he interacts more with engineers, and there is a greater sense that the whole group is pulling together. It is more open and communicative—it has to be, with most engineers’ desks merely 1.5 meters apart. “I like the whole style of the place,” he declared. Now he has moved up, to R&D manager of HP’s optoelectronics division, with a larger group of engineers under him. (He still has the cubby and metal desk, however.) As a manager, Craford sees his role as building teams, and judging which kinds of projects are worth focusing on. “I do a reasonably good job of staying on the path between being too conservative and too blue sky,” he told Spectrum. “It would be a bad thing for an R&D manager to say that every project we’ve done has been successful, because then you’re not taking enough chances; however, we do have to generate enough income for the group on what we sell to stay profitable.” Said Fred Kish, HP R&D project manager under Craford: “We have embarked upon some new areas of research that, to some people, may have been questionable risks, but George was willing to try.” Craford walks that path between conservatism and risk in his personal life as well, although some people might not believe it, given his penchant for adventurous sports: skydiving, whitewater canoeing, marathon running, and rock climbing. These are measured risks, according to Craford: “The Grand Teton is a serious mountain, but my son and I took a rock-climbing course, and we went up with a guy who is an expert, so it seemed like a manageable risk.” Holonyak recalls an occasion when a piece of crystal officially confined to the Monsanto laboratory was handed to him by Craford on the grounds that an experiment Holonyak was attempting was important. Craford “could have gotten fired for that, but he was willing to gamble.” Craford is also known as being an irrepressible asker of questions. “His methods of asking questions and looking at problems brings people in the group to a higher level of thinking, reasoning, and problem-solving,” Kish said. Holonyak described Craford as “the only man I can tolerate asking me question after question, because he is really trying to understand.” Craford’s group at HP has done work on a variety of materials over the past 15 years, including gallium aluminum arsenide for high-brightness red LEDs and, more recently, aluminum gallium indium phosphide for high-brightness orange and yellow LEDs. 
The latest generation of LEDs, Craford said, could replace incandescent lights in many applications. One use is for exterior lighting on automobiles, where the long life and small size of LEDs permit car designers to combine lower assembly costs with more unusual styling. Traffic signals and large-area display signs are other emerging applications. He is proud that his group’s work has enabled HP to compete with Japanese LED manufacturers and hold its place as one of the largest sellers of visible-light LEDs in the world.

Craford has not stopped loving the magic of LEDs. “Seeing them out and used continues to be fun,” he told Spectrum. “When I went to Japan and saw the LEDs on the Shinkansen [high-speed train], that was a thrill.” He expects LEDs to go on challenging other forms of lighting and said, “I still hope to see the day when LEDs will illuminate not just a Dewar but a room.”

Editor’s note: George Craford is currently a fellow at Philips LumiLEDs. He got his wish and then some.
  • The Fall and Rise of Russian Electronic Warfare
    Jul 30, 2022 08:00 AM PDT
    A month into Russia’s invasion, Ukrainian troops stumbled upon a nondescript shipping container at an abandoned Russian command post outside Kyiv. They did not know it then, but the branch-covered box left by retreating Russian soldiers was possibly the biggest intelligence coup of the young war. Inside were the guts of one of Russia’s most sophisticated electronic warfare (EW) systems, the Krasukha-4.

First fielded in 2014, the Krasukha-4 is a centerpiece of Russia’s strategic EW complement. Designed primarily to jam airborne or satellite-based fire control radars in the X- and Ku-bands, the Krasukha-4 is often used alongside the Krasukha-2, which targets lower-frequency S-band search radars. Such radars are used on stalwart U.S. reconnaissance platforms, such as the E-8 Joint Surveillance Target Attack Radar System (JSTARS) and Airborne Warning and Control System, or AWACS, aircraft. And now Ukraine, and by extension its intelligence partners in NATO, had a Krasukha-4 to dissect and analyze.

That Russian troops would ditch the heart of such a valuable EW system was surprising in March, when Moscow was still making gains across the country and threatening Kyiv. Five months into the war, it is now apparent that Russia’s initial advance was already faltering when the Krasukha-4 was left by the roadside. With highways around Kyiv clogged by armored columns, withdrawing units needed to lighten their load.

The abandoned Krasukha-4 was emblematic of the puzzling failure of Russian EW in the first few months of Russia’s invasion. After nearly a decade of owning the airwaves during a Moscow-backed insurgency in eastern Ukraine, EW was not decisive when Russia went to war in February. The key questions now are, why was this so, what is next for Russian EW in this oddly anachronistic war, and how might it affect the outcome?

Electronic warfare is a pivotal if invisible part of modern warfare. Military forces rely on radios, radars, and infrared detectors to coordinate operations and find the enemy. They use EW to control the spectrum, protecting their own sensing and communications while denying access to the electromagnetic spectrum by enemy troops. U.S. military doctrine defines EW as comprising electronic attack (EA), electronic protection, and electronic support.

The most familiar of these is EA, which includes jamming, where a transmitter overpowers or disrupts the waveform of a hostile radar or radio. For instance, the Russian R-330Zh Zhitel jammer can reportedly shut down—within a radius of tens of kilometers—GPS, satellite communications, and cellphone networks in the VHF and UHF bands. Deception is also part of EA, in which a system substitutes its own signal for an expected radar or radio transmission. For example, Russian forces sent propaganda and fake orders to troops and civilians during the 2014–2022 insurgency in eastern Ukraine by hijacking the local cellular network with the RB-341V Leer-3 system. Using soldier-portable Orlan-10 drones managed by a truck-mounted control system, the Leer-3 can extend its range and impact VHF and UHF communications over wider areas.
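To give a rough sense of why jammer power and geometry matter in the kind of communications jamming just described, here is a minimal, textbook-style sketch of the jamming-to-signal (J/S) power ratio at a victim receiver. It assumes free-space propagation, co-channel operation, and equal receive-antenna gain toward the jammer and the friendly transmitter; the power and range numbers are hypothetical and are not specifications of any system named in this article.

```python
# Toy J/S calculation under the stated free-space, co-channel assumptions.
# None of these numbers describe a real Russian or Ukrainian system.
import math

def js_ratio_db(erp_jammer_w, erp_signal_w, range_signal_m, range_jammer_m):
    """J/S in dB: (ERP_j / ERP_s) * (R_s / R_j)^2 under the assumptions above."""
    js = (erp_jammer_w / erp_signal_w) * (range_signal_m / range_jammer_m) ** 2
    return 10.0 * math.log10(js)

# Example: a hypothetical 1 kW jammer 10 km from the receiver versus a
# 5 W squad radio 2 km away.
print(round(js_ratio_db(1_000, 5, 2_000, 10_000), 1), "dB")  # ~ +9 dB: the jammer dominates
```

Under this simple model, doubling the jammer's standoff distance costs it 6 dB, which hints at why long-range standoff jamming demands so much transmitter power.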
The Zhitel jamming system can shut down, over tens of kilometers, GPS and satellite communications. This image shows the base of one of the four antennas in a typical setup. (Image: informnapalm.org)

The converse of electronic attack is electronic support (ES), which is used to passively detect and analyze an opponent’s transmissions. ES is essential for understanding the potential vulnerabilities of an adversary’s radars or radios. Therefore, most Russian EA systems include ES capabilities that allow them to find and quickly characterize potential jamming targets. Using their ES capabilities, most EA systems can also geolocate enemy radio and cellphone transmissions and then pass that information on so that it can be used to direct artillery or rocket fire—with often devastating effects.

A few Russian systems conduct ES exclusively; one example is the Moskva-1, which is a precision HF/VHF receiver that can use the reflections of TV and radio signals to conduct passive coherent location or passive radar operations. Basically, the system picks up the radio waves of commercial TV and radio transmitters in an area, which will reflect off targets like ships or aircraft. By triangulating among multiple sets of received waves, the target can be pinpointed with sufficient accuracy to track it and, if needed, shoot at it. (A minimal numerical sketch of this idea appears below, after the table.)

Key Russian Electronic Warfare Systems Deployed in Ukraine
  • 1RL257 Krasukha-4 (first fielded 2014): targets X-band and Ku-band radars, particularly on planes, drones, missiles, and low-orbit satellites. Consists of two KamAZ-6350 trucks, one a command post and the other outfitted with sensors.
  • 1L269 Krasukha-2 (first fielded 2011): targets S-band radars, particularly on airborne platforms; often used paired with the Krasukha-4. Also based on two KamAZ-6350 trucks.
  • RB-341V Leer-3 (first fielded 2015): disrupts VHF and UHF communications, including cellular communications and military radios, over hundreds of kilometers. Consists of a truck-based command post that works with Orlan-10 drones to extend its range.
  • R-330Zh Zhitel (first fielded 2011): jammer; can shut down GPS and satellite communications over a radius of tens of kilometers. Consists of a truck command post and four telescopic-mast phased-array antennas.
  • Murmansk-BN (first fielded 2020): long-range detection and jamming of HF military radios. Russian sources claim it can jam communications thousands of kilometers away.
  • R-934B (first fielded 1996): VHF/UHF jammer that targets wireless and wired communications. Consists of either a truck or a tracked vehicle and a towed 16-kilowatt generator.
  • SPN-2, 3, 4 (first fielded: not available): X- or Ku-band jammers that target airborne radars and air-to-surface guidance-control radars. Consists of a combat-control vehicle and an antenna vehicle.
  • Repellent-1 (first fielded 2016): antidrone system. Weighs more than 20 tonnes.
  • Moskva-1 (first fielded 2015): precision HF/VHF receiver for passive coherent location of enemy ships and planes. Published sources cite a range of up to 400 kilometers.
Sources: Wikipedia; Military Factory; Global Defence Technology; U.S. Army; Air Power Australia; U.S. Army Training and Doctrine Command; Russian Electronic Warfare: The Role of Electronic Warfare in the Russian Armed Forces, Jonas Kjellén, Swedish Defence Research Agency (FOI), 2018; Defence24

Russia uses specialized electronic-warfare units to conduct its EA and ES operations. In its ground forces, dedicated EW brigades of several hundred soldiers are assigned to the five Russian military districts—West, South, North, Central, and East—to support regional EW operations that include disrupting enemy surveillance radars and satellite communication networks over hundreds of kilometers.
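As an aside, the passive-coherent-location idea behind the Moskva-1 can be illustrated with a toy calculation: measure how much longer the reflected path (transmitter to target to receiver) is than the direct path at several receive sites, then search for the position that best explains those delays. Everything in the sketch below, including the geometry, the receiver sites, and the noiseless measurements, is invented for illustration; a real system must also handle Doppler processing, clutter, and calibration.

```python
# Toy passive-coherent-location example: locate a reflector from bistatic
# path-delay measurements at three hypothetical receive sites.
import itertools
import numpy as np

C = 299_792_458.0                          # speed of light, m/s
tx = np.array([0.0, 0.0])                  # hypothetical commercial broadcast tower
rx_sites = np.array([[40e3, 0.0],          # hypothetical passive receive sites
                     [0.0, 35e3],
                     [30e3, 30e3]])
target = np.array([60e3, 45e3])            # "true" target, used only to simulate data

def bistatic_delay(p, rx):
    """Extra delay of the reflection (tx -> p -> rx) relative to the direct tx -> rx path."""
    return (np.linalg.norm(p - tx) + np.linalg.norm(rx - p) - np.linalg.norm(rx - tx)) / C

measured = np.array([bistatic_delay(target, rx) for rx in rx_sites])

# Brute-force least-squares grid search: crude, but easy to verify by hand.
xs = np.linspace(0, 100e3, 201)
ys = np.linspace(0, 100e3, 201)
best, best_err = None, np.inf
for x, y in itertools.product(xs, ys):
    p = np.array([x, y])
    err = sum((bistatic_delay(p, rx) - m) ** 2 for rx, m in zip(rx_sites, measured))
    if err < best_err:
        best, best_err = p, err

print("estimated target position (m):", best)   # lands on (60000.0, 45000.0)
```

With noiseless data and three sites the grid search recovers the simulated target exactly; real systems replace this brute-force search with far more efficient estimators.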
EW brigades are equipped with the larger Krasukha-2 and -4, Leer-3, Moskva-1, and Murmansk-BN systems (the latter of which detects and jams HF radios). Each Russian army maneuver brigade also includes an EW company of about 100 personnel that is trained to support local actions within about 50 kilometers using smaller systems, like the R-330Zh Zhitel.

Militaries use electronic protection (EP), also known as electronic countermeasures, to defend against EA and ES. Long considered an afterthought by western forces after the Cold War, EP has risen again to be perhaps the most important aspect of EW as Russia and China field increasingly sophisticated jammers and sensors. EP includes tactics and technologies to shield radio transmissions from being detected or jammed. Typical techniques include using narrow beams or low-power transmissions, as well as advanced waveforms that are resistant to jamming.

Experts have long touted Russia as having some of the most experienced and best-equipped EW units in the world. So in the early days of the 24 February invasion, analysts expected Russian forces to quickly gain control of, and then dominate, the electromagnetic spectrum. Since the annexation of Crimea in 2014, EW has been a key part of Russian operations in the “gray zone,” the shadowy realm between peace and war, in the Donbas region. Using Leer-3 EW vehicles and Orlan-10 drones, Moscow-backed separatists and mercenaries would jam Ukrainian communications and send propaganda over local mobile-phone networks. When Russian forces were ready to strike, the ground and airborne systems would detect Ukrainian radios and target them with rocket attacks.

But after nearly a decade of rehearsals in eastern Ukraine, when the latest escalation and invasion began in February, Russian EW was a no-show. Ukrainian defenders did not experience the jamming they faced in the Donbas and were not being targeted by drones or ground-based electronic surveillance. Although Russian forces did blow up some broadcast radio and television towers, Ukraine’s leaders continued to reach the outside world unimpeded by Russian EW.

Russia is gaining the upper hand now, having consolidated control in Ukraine's east and south as the invaded country begins running out of soldiers, weapons, and time. With more defined front lines and better logistics support from their homeland, Russian troops are now using their EW systems to guide artillery and rocket strikes. But instead of being the leading edge of Russia’s offensive, EW is coming into play only after Moscow resorted to siege tactics that call to mind the origins of EW in World War I.

The RF spectrum was a lot less busy then. Commanders used their new radios to coordinate troop movements and direct fire and employed early passive direction-finding equipment to locate or listen to enemy radio transmissions. While communications jamming emerged at the same time, it was not widely employed. Radio operators realized that simply keying their systems could send out a blast of white noise to drown the transmissions of other radios operating at the same frequencies. But this tactic had limited operational value, because it also prevented forces doing the jamming from using the same radio frequencies to communicate.
Moreover, warfare happened slowly enough that the victim could simply wait out the jammer. Thus, World War I EW was exemplified by passive detection of radio transmissions and infrequent, rudimentary jamming. The shift to more sophisticated EW systems and tactics occurred with World War II, when technological advances made airborne radars and jammers practical, better tuners allowed jamming and communicating on separate frequencies, and the increased tempo of warfare gave combatants an incentive to not just jam enemy transmissions but to intercept and exploit them as well. Consider the Battle of Britain, when the main challenge for German pilots was reaching the right spot to drop their bombs. Germany used a radio-beacon system it called Knickebein (“crooked leg” in English) to guide its bombers to British aircraft factories, which the British countered with fake beacons that they code-named Aspirin. To support British warplanes attacking Germany in 1942, the Royal Air Force (RAF) fielded the GEE hyperbolic radio navigation system that allowed its bomber crews to use transmissions from British ground stations to determine their in-flight positions. Germany countered with jammers that drowned out the GEE transmissions. The World War II EW competition extended to sensing and communication networks. RAF and U.S. bombers dispensed clouds of metallic chaff called Window that confused German air-defense radars by creating thousands of false radar targets. And they used VHF communication jammers, which the British called Jostle, to interfere with German ground controllers attempting to vector fighters toward allied bombers. The move-countermove cycle accelerated in response to Soviet military aggressions and advances in the 1950s. Active countermeasures such as jammers or decoys proliferated, thanks to technological advances that enabled EW systems with greater power, wider frequency ranges, and more complex waveforms, and which were small enough to fit aircraft as well as ships. Later, as Soviet military sensors, surface-to-air missiles, and antiship cruise missiles grew in their sophistication and numbers, the U.S. Department of Defense sought to break out of the radar-versus-electronic-attack competition by leveraging emerging materials, computer simulation, and other technologies. In the years since, the U.S. military has developed multiple generations of stealth aircraft and ships with severely reduced radio-frequency, infrared, acoustic, and visual signatures. Russia followed with its own stealth platforms, albeit more slowly after the Soviet Union’s collapse. But today, years of underfunded aviation training and maintenance and the rapid introduction by NATO of Stinger shoulder-launched surface-to-air missiles have largely grounded Russian jets and helicopters during the Ukraine invasion. So when Russian troops crossed the border, they faced a situation not unlike the armies of World War I. Without airpower, the Russian assault crawled at the speed of their trucks and tanks. And although they proved effective in the Donbas during the last decade, Russian drones are controlled by line-of-sight radios operating in the Ka- and Ku-bands, which prevented them from straying too far from their operators on the ground. With Russian columns moving along multiple axes into Ukraine and unable to send EW drones well over the horizon, any jamming of Ukrainian forces, some of which were interspersed between Russian formations, would have also taken out Russian radios. 
Russian EW units did use Leer-3 units to find Ukrainian fighters via their radio and cellphone transmissions, as they had in the Donbas. But unlike Ukraine’s rural east, the areas around Kyiv are relatively densely populated. With civilian cellphone transmissions mixed in with military communications, Russian ES systems were unable to pinpoint military transmitters and use that information to target Ukrainian troops. Making matters worse for the Russians, Ukrainian forces also began using the NATO Single-Channel Ground and Airborne Radio System, or SINCGARS. Ukrainian troops had trained for a decade with SINCGARS, but the portable VHF combat radios were scarce until the lead-up to the Russian invasion, when the flood of NATO support sent SINCGARS radios to nearly every Ukrainian ground unit. Unlike Ukraine’s previous radios, which were Russian-built and included backdoors for the convenience of Russian intelligence, SINCGARS have built-in encryption. To protect against jamming and interception, SINCGARS automatically hops among frequencies up to 100 times a second across its overall coverage of 30 to 88 megahertz. Because SINCGARS can control signals within 25-kilohertz bands, the user can select among more than 2,000 channels. As in World War I, the lack of airpower also affected the speed of conflict. The widely circulated videos of Russian armored convoys stuck along the roads around Kyiv were a stark reminder that ground operations can only move as fast as their fuel supply. In World War II and the Cold War, bombing missions and other air operations happened so quickly that even if jamming impacted friendly forces, the effect would be temporary, as the positions of jammers, jamming targets, and bystanders would quickly change. But when Russian forces were trundling toward the urban areas of northern Ukraine, they were going so slowly that they were unable to exploit changing geometries to get their jammers into positions from which they could have substantial effects. At the same time, Russian troops were not sitting still, which prevented them from setting up a large system like the Krasukha-4 to blind NATO radars in the air and in space. Russian EW is gaining an advantage only now because Moscow’s strategy of quickly taking Kyiv failed, and it shifted to a grinding war of attrition in Ukraine’s south. So what’s next? The Kremlin’s fortunes have improved now that its soldiers are fighting from Russian-held territory in Ukraine’s east. No longer spread out along multiple lines in suburban areas, invading troops are now able to use EW to support a strategy of incrementally gaining territory by finding Ukrainian positions and overwhelming them with Russia’s roughly 10-to-1 advantage in artillery. As of this writing, at least three of Russia’s five EW brigades are engaged in Ukraine. And with more exposure to NATO-supplied radios, experienced Russian EW operators who cut their teeth in the last decade of war in Syria are beginning to detect and degrade Ukrainian communications. EW brigades are using the Leer-3’s Orlan-10 drones to detect Ukrainian artillery positions based on their radio emissions, although the encryption and frequency hopping of SINCGARS radios makes them hard to intercept and exploit. Because the front lines are now better defined compared to the early war around Kyiv, Russian forces can assume the detections are from Ukrainian military units and direct artillery and rocket fire against those locations. 
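The SINCGARS figures quoted above (a 30 to 88 MHz band, 25-kilohertz channel spacing, and roughly 100 hops per second) are enough to reproduce the channel count with a few lines of arithmetic. The sketch below does that and then generates a purely illustrative hop schedule; the real hopset algorithm and its keying material are not public, so the seeded pseudorandom generator here is only a stand-in.

```python
# Channel count and a toy hop schedule from the figures quoted in the article.
import random

BAND_LOW_HZ = 30_000_000
BAND_HIGH_HZ = 88_000_000
CHANNEL_WIDTH_HZ = 25_000

num_channels = (BAND_HIGH_HZ - BAND_LOW_HZ) // CHANNEL_WIDTH_HZ
print("channels:", num_channels)          # 2320, i.e. "more than 2,000 channels"

def toy_hop_sequence(seed, hops_per_second=100, duration_s=0.05):
    """Yield (time_s, nominal_channel_frequency_hz) pairs for a toy hop schedule."""
    rng = random.Random(seed)               # illustrative PRNG, not the real hopset
    for k in range(int(hops_per_second * duration_s)):
        channel = rng.randrange(num_channels)
        yield k / hops_per_second, BAND_LOW_HZ + channel * CHANNEL_WIDTH_HZ

for t, f in toy_hop_sequence(seed=42):
    print(f"t={t:.3f} s  f={f/1e6:.3f} MHz")
```

With more than 2,300 possible channels and a new one every 10 milliseconds, a narrowband receiver or jammer that cannot follow the hop pattern spends nearly all of its time on empty channels, which is the basic reason the article describes these radios as hard to intercept and exploit.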
Russian troops are using Orlan-10 drones [foreground] in conjunction with the Leer-3 electronic-warfare system (which includes the truck in the background) to identify and attack Ukrainian units. (Image: iStockphoto)

The Krasukha-4, which was too powerful and unwieldy to be useful during the assault on Kyiv, is also making a reappearance. Exploiting Russia’s territorial control in the Donbas, EW brigades are using the Krasukha-4 to jam the radars on such Ukrainian drones as the Bayraktar TB2, and to interfere with their communication links, preventing Ukrainian forces from locating Russian artillery emplacements.

To gain flexibility and mobility leading up to the invasion, the Russian army broke its 2,000-soldier maneuver brigades into smaller battalion tactical groups (BTGs) of 300 to 800 personnel in such a way that each included a portion of the original maneuver brigade’s EW company. Today, BTGs operating in southern and eastern Ukraine are employing shorter-range VHF-UHF electronic attack systems like the R-330Zh Zhitel to disable Ukrainian drones ranging from Bayraktar TB2s to small DJI Mavics by jamming their GPS signals. BTGs are also attacking Ukrainian communications using R-934B VHF and SPR-2 VHF/UHF jammers, with some success. Although Ukrainian soldiers have SINCGARS radios, they still rely on vulnerable cellphones and radios without encryption or frequency hopping when SINCGARS is down or unavailable.

But Ukraine is fighting back against Russia’s spectrum assault. Using counter-drone systems provided by the United States before the invasion, Ukrainian troops have downed hundreds of Russian drones by jamming their GPS signals or possibly by damaging their electronics with high-powered microwave beams, a specific type of EA where electromagnetic energy is used to generate high voltages in sensitive microelectronics that damage transistors and integrated circuits. Ukrainian forces are also leveraging U.S.-supplied EW systems and training to jam Russian communications. Unlike their Ukrainian counterparts, Russian troops do not have a system like SINCGARS and often rely on cellphones or unencrypted radios to coordinate operations, making them susceptible to Ukrainian geolocation and jamming. In this way, stabilization of the front lines also helps Ukraine’s EW efforts because it allows quick correlation of transmissions to locations.

Ukraine’s defenders also exploited a weakness of the large and powerful Russian EW systems—they are easy to find. Using U.S.-supplied ES gear, Ukrainian troops have been able to detect transmissions from systems like the Leer-3 or Krasukha-4 and direct rocket, artillery, and drone counterattacks against the truck-borne Russian systems.

The Ukraine invasion shows EW can change the course of a war, but it’s also showing that the fundamentals still matter. Without airpower or satellite-guided drones, Russia’s army could not get jammers over the horizon to degrade Ukrainian communications and radars in advance of troops moving on Kyiv. Forced to use short-range unmanned aircraft and ground systems, Russian EW brigades operating with BTGs had to worry about interfering with friendly operations and could not distinguish Ukrainian troops from civilians. They also had to stay on the move, reducing the utility of their large multivehicle EW systems.
So for now, unable to reach over the horizon, Russian EW ground units can jam Ukrainian troops only when they are separated by clearly defined battle lines. They are relying on systems like the Leer-3 to find Ukrainian emissions so Russian artillery can then overwhelm the defenders with volleys of shells and rockets. Russian EW systems like the Krasukha-4 and R-330Zh Zhitel can disable GPS or radars on Ukrainian drones, but it’s not substantially different from shooting down aircraft with guns. And although ES systems like the Moskva-1 could hear signals over the horizon, Russia is running out of the long-range missiles that could exploit such detections.

Perhaps the biggest lesson from Ukraine for EW is that winning the airwaves does not equal winning the war. Russia is on top of the EW war now only because its lightning assault became a pulverizing slog. The situation could quickly flip if Kyiv’s troops, with western support, regain control of Ukraine’s skies, where they could electronically and physically disrupt the management and logistics that keep Russia’s rickety war machine trundling along.
  • Covert Actions Heighten Ukraine’s Nuclear Peril
    Jul 29, 2022 01:37 PM PDT
    In March, when Russia seized Ukraine’s Zaporizhzhia power plant—Europe’s largest—the actions veered dangerously close to nuclear disaster. National Public Radio, in the U.S., analyzed video and photos from the attack and showed, among other too-close calls, that ordnance landed some 75 meters away from a reactor building. Since then, Zaporizhzhia’s grounds have been used to shelter troops, military equipment, and munitions. According to the Ukrainian government, Russia has also fired cruise missiles over two more nuclear power stations.

Yet, recent evidence suggests that a more opaque threat may also be stalking Ukraine’s four nuclear generating stations: a cloak-and-dagger struggle for control of state nuclear energy firm Energoatom, pitting activist nuclear professionals against alleged Russian agents. It’s an unstable situation that—like Russia’s military actions—increases the risk of accidents that could spread radiation across Europe and threatens Ukraine’s ability to defend itself.

Ukraine's 15 reactors generate over half of its electricity. Meanwhile, thanks to Ukraine’s rapid post-invasion synchronization with Europe’s power grid, increasing electricity exports are also helping the embattled nation to finance the war. But already Ukraine faces the loss of Zaporizhzhia’s power generation, with Russia vowing to hold the surrounding territory indefinitely and rebuilding wrecked transmission lines to reroute the plant's power to occupied Crimea.

Energoatom director of personnel Oleg Boyarintsev is pictured here, having returned to work after detention and questioning by Ukrainian counterintelligence agents. (Image: Energoatom)

The murky internal battle for Ukraine’s nuclear power popped into sight briefly in late March when a few Ukrainian news outlets and IEEE Spectrum reported that Ukrainian counterintelligence officers had detained and questioned Energoatom director of personnel Oleg Boyarintsev. That cast a shadow over officials across Energoatom that Boyarintsev had appointed. The conflict quickly slipped back behind the scenes.

But Energoatom and its leadership are back in the spotlight. Battle lines have stabilized, and President Volodymyr Zelenskyy is leading a campaign to out Russian agents. This month Zelenskyy affirmed pervasive infiltration of Ukraine’s state security service, the SBU, which routinely places officials at Energoatom headquarters and its plants. At the same time, moves by SBU counterintelligence agents, deputies in Ukraine’s parliament, and company officials have heightened concerns about the security and safety of Energoatom’s operations. SBU spy hunters said they pierced an “extensive agent network” last month allegedly led by Boyarintsev’s longtime political patron and business partner Andriy Derkach, whom the SBU and U.S. intelligence agencies say is a Russian agent.

Then, early this month, Energoatom CEO Petro Kotin stunned a panel of deputies probing Energoatom personnel issues. Asked why Boyarintsev was not present as requested, Kotin told the energy committee he had the day off. Then Kotin gave contradictory answers when asked why he recently dismissed the director of the Rivne Nuclear Power Plant, which lies just under 60 kilometers from Belarus and is the largest still under Ukraine’s control.

Under CEO Petro Kotin, Energoatom has faced repeated accusations of corruption and sliding back toward Russian influence. (Image: Ukrinform/Alamy)

Kotin said Rivne’s director was suspected of hiding safety violations.
At the same time Kotin insisted he was also needed to run the subsidiary racing to start up a recently completed facility to store spent nuclear fuel that was previously sent to Russia. Without the storage facility, Ukraine can’t refuel its reactors, prompting the panel’s chair to note that Kotin assigned an allegedly dodgy official a surprisingly critical mission. Ukrainian news site Glavcom’s take from the hearing was that Ukraine’s nuclear plants were “in danger,” and that a “hunt for collaborators” was on. The panel’s deputy chairman concurred, posting that “Russian ears are sticking out now from all sides.”

Codename “Veteran”

Energoatom did not respond to IEEE Spectrum’s requests to reach Boyarintsev, Kotin, and other officials. But back in March, the firm attacked its loudest critic, Olga Kosharna, a former advisor to Ukraine’s nuclear regulator. Energoatom said it was Kosharna who was under Russian influence and spreading Russian disinformation. A defamation suit filed by Kosharna against Energoatom will be heard in October, according to a Facebook post from her lawyer, who heads the energy-law committee for Ukraine’s bar association.

Kosharna maintains her March 2022 claim that officials planted by Boyarintsev facilitated the Zaporizhzhia plant’s seizure, including a new plant director appointed eight days before the 24 February invasion. In communications with IEEE Spectrum, she extended that allegation to include Alexander Prismitsky, an SBU officer serving as the plant’s deputy director for physical protection, who she said is the subject of an SBU investigation.

Boyarintsev did not act alone, according to Kosharna. Andriy Derkach, who the SBU says worked for Russian intelligence under the codename “Veteran,” is suspected of directing Boyarintsev's work at Energoatom. Derkach is a long-serving Ukrainian deputy, a pro-Russian media commentator, and a former Energoatom CEO. His whereabouts since the invasion are unknown. Derkach gained global notoriety delivering alleged kompromat on U.S. President Joe Biden in 2019. In spite of that, he is widely credited with driving Boyarintsev’s inclusion when Zelenskyy appointed Kotin and a new leadership team in 2020. Why else, ask people like Kosharna and other nuclear professionals, could someone with unsavory associates in organized crime win a job so crucial to Ukraine’s security?

Since Kotin’s team arrived at Energoatom, journalists, activists, and government watchdogs have documented a series of suspicious activities including the dumping of electricity on the market, the illegal dismissal of Energoatom’s independent anti-corruption official, and embezzlement of funds for the long-delayed spent-fuel repository. Meanwhile, a slide back toward Russian influence is now feared concerning Ukraine’s Russian-designed and mostly Russian-fuelled nuclear plants. Ukrainian security analyst Pavel Kost, who several years ago praised Energoatom as one of the “quiet heroes” of post-Yanukovich Ukraine, last year called out the growing influence of “pro-Russian circles” and “silent sabotage” of crucial projects such as the spent-fuel repository.

It’s no surprise, then, that over half of Ukraine’s parliamentarians called last year for new leadership to improve Energoatom’s operations and assure nuclear safety. Jeff Merrifield, a former U.S.
Nuclear Regulatory Commission member and international nuclear consultant, likened the situation facing Ukraine’s nuclear plants to a “multilayer set of chess.” While he declined to address the specific accusations against Energoatom leaders, Merrifield said they “were not entirely surprising” based on some of the “unsavory” activity he’s observed in 20 years of work in both Ukraine and Russia.

Kosharna, meanwhile, is not the only Ukrainian professional asking tough questions. When asked if many nuclear staff in Ukraine are concerned about Russian agents, Women in Nuclear Ukraine founder Margaryta Rayets messaged IEEE Spectrum that, “Russia did its best in terms of lobbying its interests by attracting its agents into all the spheres! So, our task for today is to detect them all as soon as possible.”

Rivne on the edge

The loudest critical voice among engineers and scientists (at least in writing) is Georgiy Balakan, a former top Energoatom engineer who led collaborations with U.S. national labs, Westinghouse Electric, and European agencies to upgrade safety at Ukraine’s plants. Since April he has posted a series of risk assessments, warnings, and questions aimed at securing Ukraine’s nuclear plants against internal and external attack. On 10 July, Balakan posted a pointed essay titled “How to avoid nuclear ‘Bucha’ at the nuclear power plants of Ukraine?”—a reference to Russian forces’ scorched-earth devastation of Bucha that shocked the world in April. In it he calls for terminating senior plant officials who are past or present SBU officers, a moratorium on dismissing plant directors, and more.

An accompanying post emphasized the risks facing the Rivne station. Balakan warns that Russia could seize Rivne via an airborne assault, noting increased Russian activity nearby in Belarus and stepped-up airborne training. Balakan also argued that the attacks on Rivne's director, Pavlo Pavlyshyn, weaken Rivne by demoralizing plant personnel.

Energoatom officials scattered from its headquarters when Russian troops and missiles surged over the border in February and March. But Rivne’s embattled director stood his ground, meeting journalists to condemn Russia’s irresponsible attacks on nuclear energy installations and garnering international support. “From the first days of the war, his steadfast patriotic position united everyone,” agreed the City Council of Varash, Rivne’s satellite city, in a recent appeal to Zelenskyy to stop the plant’s “destabilization.” The letter echoed Balakan’s concerns about a “high probability of an armed attack,” and disputed Kotin’s allegations against Pavlyshyn and the plant’s safety.

Ilona Zayets, a journalist and former Energoatom communications aide, told IEEE Spectrum this week that Kotin and his supporters “need to discredit” Pavlyshyn before he gets to Zelenskyy, because Pavlyshyn has the inside scoop on Energoatom’s troubled projects. If she’s right, they may be too late. Pavlyshyn posted a video this week suggesting that he’s already working against Kotin: “Dear curators of my resignation. Your involvement in unlawful actions not in the interests of the Ukrainian state will certainly be exposed.”

Editor’s note: This story was originally published on 29 July 2022 and subsequently unpublished for additional editorial review. Spectrum apologizes for any confusion this story’s publication history may have caused.
  • Video Friday: Grip Anything
    Jul 29, 2022 09:00 AM PDT
    Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. IEEE CASE 2022: 20–24 August 2022, MEXICO CITY CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND Enjoy today’s videos! A University of Washington team created a new tool that can design a 3D-printable passive gripper and calculate the best path to pick up an object. The team tested this system on a suite of 22 objects—including a 3D-printed bunny, a doorstop-shaped wedge, a tennis ball and a drill. [ UW ] Combining off-the-shelf components with 3D-printing, the Wheelbot is a symmetric reaction wheel unicycle that can jump onto its wheels from any initial position. With non-holonomic and under-actuated dynamics, as well as two coupled unstable degrees of freedom, the Wheelbot provides a challenging platform for nonlinear and data-driven control research. [ Wheelbot ] I think we posted a similar video to this before, but it’s so soothing and beautifully shot and this time it’s fully autonomous. Watch until the end for a very impressive catch! [ Griffin ] Quad-SDK is an open source, ROS-based full stack software framework for agile quadrupedal locomotion. The design of Quad-SDK is focused on the vertical integration of planning, control, estimation, communication, and development tools which enable agile quadrupedal locomotion in simulation and hardware with minimal user changes for multiple platforms. [ Quad-SDK ] Zenta makes some of the best legged robots out there, including MorpHex, which appears to be still going strong. And now, a relaxing ride with MXPhoenix : [ Zenta ] We have developed a set of teleoperation strategies using human hand gestures and arm movements to fully teleoperate a legged manipulator through whole-body control. To validate the system, a pedal bin item disposal demo was conducted to show that the robot could exploit its kinematics redundancy to follow human commands while respecting certain motion constraints, such as keeping balance. [ University of Leeds ] Thanks Chengxu! An introduction to HEBI’s Robotics line of modular mobile bases for confined spaces, dirty environments, and magnetic crawling. [ HEBI Robotics ] Thanks Kamal! Loopy is a robotic swarm of 1- Degree of Freedom (DOF) agents (i.e., a closed-loop made of 36 Dynamixel servos). In this iteration of Loopy, agents use average consensus to determine the orientation of a received shape that requires the least amount of movement. In this video, Loopy is spelling out the Alphabet. [ WVUIRL ] The latest robotics from DLR, as shared by Bram Vanderborght. [ DLR ] Picking a specific object from clutter is an essential component of many manipulation tasks. Partial observations often require the robot to collect additional views of the scene before attempting a grasp. This paper proposes a closed-loop next-best-view planner that drives exploration based on occluded object parts. [ Active Grasp ] This novel flying system combines an autonomous unmanned aerial vehicle with ground penetrating radar to detect buried objects such as landmines. The system stands out with sensor specific low altitude flight maneuvers and high accuracy position estimation. Experiments show radar detections of targets buried in sand. 
[ FindMine ] In this experiment, we demonstrate a combined exploration and inspection mission using the RMF-Owl collision tolerant aerial robot inside the Nutec RelyOn facilities in Trondheim, Norway. The robot is tasked to autonomously explore and inspect the surfaces of the environment—within a height boundary—with its onboard camera sensor given no prior knowledge of the map. [ NTNU ] Delivering donuts to our incredible Turtlebot 4 development team! Includes a full walkthrough of the mapping and navigation capabilities of the Turtlebot 4 mobile robot with Maddy Thomson, Robotics Demo Designer from Clearpath Robotics. Create a map of your environment using SLAM Toolbox and learn how to get your Turtlebot 4 to navigate that map autonomously to a destination with the ROS 2 navigation stack. [ Clearpath ]
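As an aside on the Loopy item above, the "average consensus" it mentions is a simple distributed update in which each agent repeatedly nudges its value toward those of its neighbors until all agree on the common average. The sketch below runs that update on a ring of 36 agents to match the servo count quoted; the initial values, gain, and iteration count are arbitrary illustrative choices, not details of the WVU implementation.

```python
# Minimal average-consensus sketch on a ring of 36 agents (one per servo).
N = 36
values = [float(i % 7) for i in range(N)]          # arbitrary starting values
true_average = sum(values) / N

gain = 0.2                                         # must stay below 0.5 for a ring to remain stable
for _ in range(2000):
    values = [
        v + gain * ((values[(i - 1) % N] - v) + (values[(i + 1) % N] - v))
        for i, v in enumerate(values)
    ]

print("spread after consensus:", round(max(values) - min(values), 6))        # shrinks toward 0
print("agent 0 value:", round(values[0], 3), "vs average:", round(true_average, 3))
```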
  • Will a Baby Born Today Grow up to Live Like the Jetsons?
    Jul 29, 2022 08:00 AM PDT
    “The Jetsons” premiered on ABC on 23 September 1962. Set 100 years in the future, the animated comedy series followed the Jetson family—George and Jane and their kids, Judy and Elroy—as they went about their futurist yet surprisingly relatable lives in Orbit City, in a tech-laden house on adjustable columns, commuting by flying car and gliding along moving sidewalks, even in their own home. Although the series ran for just one season, the Jetsons and their wacky space-age world have remained pop-culture touchstones ever since, thanks to near-continuous syndication, two later seasons in the 1980s, and several TV specials and movies. And, of course, merchandise. A year after the premiere of “The Jetsons,” Aladdin Industries manufactured the steel lunch box pictured above. Plain, dome-style lunch boxes had long been used by factory workers, but this one, adorned with the Jetson family, Astro the dog, and Rosey the robot maid, was intended for the grade-school set. Food technology is a recurring theme on the show. In Episode 1, the Foodarackacycle, a push-button meal dispenser, badly malfunctions, leading to the hiring of Rosey, the robot maid with a Brooklyn accent and a heart of gold. In a later episode, Jane Jetson uses the (rotary) Dial-a-Meal to order up a complete breakfast, including burnt toast, in pill form—yet coffee remains a liquid served in a cup. And in the opening credits, Elroy zooms off to school in a space pod but toting a solidly 20th-century lunch box. If the Jetsons’ food tech was a push button too far, many of the other gadgets that they used are commonplace today: video calls, e-readers and tablets, smart watches. I am still waiting for my jet pack and flying car. For a deep dive into the culture and technology of “The Jetsons,” check out Matt Novak’s Paleofuture blog. Back in 2012, for the show’s 50th anniversary, he dissected all 24 episodes of the original series. Not all of the episode breakdowns are still available, but his discourse on Episode 1 is worth a read. “The Jetsons” was the first program broadcast in color on ABC. Unfortunately, only viewers in a few select markets—Chicago, Detroit, Los Angeles, New York, and San Francisco—could actually see the show in color. Elsewhere, it aired in black and white. The network had been slow to jump on the color-TV bandwagon, not quite convinced the technology was here to stay. After all, RCA had just started turning a profit on color televisions the previous year, and only 3 percent of Americans owned color televisions at the time. The series ran for just one season, and yet the Jetsons and their wacky space-age world have remained pop-culture touchstones ever since. As a cultural historian who trained as an electrical engineer, I’m impressed by how pervasively “The Jetsons” has seeped into society. While researching this piece, I came across an academic article that used the show in a mock court case, another one that invoked George Jetson and the tragedy of the commons to analyze the problem of waste management, and countless news articles that name-checked the Jetsons in their headlines to draw attention to new inventions. In 2007 Forbes did a ranking of the top 25 fictional companies; Spacely Space Sprockets, where George Jetson worked, came in dead last, with estimated annual sales of US $1.3 billion. The lunch box pictured at top is in the collection of the Smithsonian’s National Museum of American History. “The Jetsons” clearly has staying power. 
I peg much of its enduring popularity to the fact that it was in almost nonstop syndication for decades as part of the Saturday morning cartoon lineup. That’s where I first saw the show, over and over again, until a second season of 41 episodes was added in 1985. Ten more episodes came out in 1987, followed by a smattering of movies, TV specials, and direct-to-video or -DVD releases. So many kids grew up watching “The Jetsons” from the 1960s through the 1980s that the show has become a handy shortcut for talking about futuristic technology. And in the Jetsons’ world, that futuristic technology might occasionally backfire, but it is never menacing or threatening. Automation has finally delivered on its labor-saving promise, and George works just 3 hours a day, 3 days a week. Everyone is living their best lives. What’s not to love? Happy Birthday, George Jetson! And now for the impetus for this month’s column: According to devoted “Jetsons” fans and Internet sleuths, George Jetson was born right about now—as in July or August 2022. It’s a little difficult to pinpoint the exact birth date of a fictional cartoon character of the future who was introduced almost 60 years ago yet won’t reach adulthood for a couple more decades. But here is what we know. This board game came out in 1985, with the second season of “The Jetsons” and more than two decades after the original show aired.The Strong, Rochester, N.Y. Speculations about George’s birthday saw an uptick last November, with various wikis and memes suggesting that it falls on either 31 July or 27 August 2022. Although George never celebrates his birthday in any episode, fans combed the Hanna-Barbera canon to arrive at the date, even if it meant doing some mathematical gymnastics to get there. So many kids grew up watching “The Jetsons” that the show has become a handy shortcut for talking about futuristic technology. According to various Internet sources, the original promotional materials for “The Jetsons” described its setting as exactly 100 years in the future—September 2062, in other words. From the opening credits, we learn that George is a happily married, middle-aged father of two. His son, Elroy, is in elementary school, while his daughter, Judy, attends Orbit High. George’s age is revealed in the episode “Test Pilot,” in which George’s boss sends him to the doctor for an annual checkup. (George needs this physical for insurance coverage; even in the second half of the 21st century, people don’t have universal health care.) Due to a mix-up at the lab, George thinks his death is imminent, so he agrees to engage in high-risk behavior—namely, testing the survivability of the Spacely Lifejacket, a supposedly indestructible garment. George’s doctor eventually reveals the mix-up and tells George that he will live to 150. George, wearing the lifejacket and about to be fired upon by missiles, screams, “I got 110 good years ahead of me!” Do the math: He must be 40 years old. So happy birthday, Mr. Jetson. That also means the high-tech world inhabited by the Jetsons is just 40 years away. Will food replicators cook our meals by then? Maybe. Will our flying cars fold up into briefcases? Maybe not. Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the August 2022 print issue as “Lunch Box of the Future.”
  • U.S. Passes Landmark Act to Fund Semiconductor Manufacturing
    Jul 29, 2022 07:53 AM PDT
    Legislation aimed at increasing semiconductor manufacturing in the United States has finally passed both houses of Congress, following a multiyear journey that saw many mutations and delays. The CHIPS and Science Act provides about US $52 billion over 5 years to grow semiconductor manufacturing and authorizes a 25 percent tax credit for new or expanded facilities that make semiconductors or chipmaking equipment. It’s part of a $280 billion package aimed at improving the United States’ ability to compete in future technologies. And it comes amidst efforts by other nations and regions to boost chip manufacturing, an industry increasingly seen as a key to economic and military security.

“This is going to make a huge difference in how the U.S. does innovation,” says Russell T. Harrison, acting managing director of IEEE-USA, who has been involved in the legislation since its beginnings more than two years ago.

The bill’s $52 billion includes $39 billion in grants for new manufacturing, $11 billion for federal semiconductor research programs and workforce development, and $2 billion for Defense Department–related microelectronics activities. In addition, the bill directs $200 million over five years to the National Science Foundation to “promote growth of the semiconductor workforce.” The Commerce Department expects the United States will need 90,000 more workers in the semiconductor industry by 2025. And there’s a further $500 million for “coordinating with foreign government partners to support international information and communications technology security and semiconductor supply-chain activities, including supporting the development and adoption of secure and trusted telecommunications technologies, semiconductors, and other emerging technologies.”

The 25 percent tax credit goes a long way toward making the building of new capacity in the United States comparable with building it offshore, according to Ian Steff, former Assistant Secretary of Commerce, and now a consultant advising Minnesota-based chip foundry Skywater Technology. “Twenty-five percent means we’re in it to win,” he says.

The legislation has been variously sold as an opportunity to create well-paid jobs, a chance to strengthen the semiconductor supply chain following the chip shortage of 2020, and as a national-defense imperative that would lessen the concern that China might strangle the supply of 90 percent of the most advanced logic by attacking Taiwan. It might be all of that.

Big chip manufacturers have been planning to add and expand fabs in anticipation of government incentives. GlobalFoundries is doing a $1 billion addition in Malta, N.Y. TSMC is already building a $12 billion facility in Arizona. And Samsung plans a $17 billion fab outside Austin, while dangling the possibility of nearly $200 billion in the future. Intel was probably the most explicit in its expectations. When it announced a plan for a $20 billion fab complex in Ohio, Keyvan Esfarjani, Intel senior vice president of manufacturing, supply chain, and operations, made the strings explicit: “The scope and pace of Intel’s expansion in Ohio...will depend heavily on funding from the CHIPS Act,” he said at the time. The company said its investment could reach $100 billion over ten years with the proper government backing.
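For a hedged back-of-the-envelope sense of what the 25 percent investment tax credit means, consider Intel's announced $20 billion Ohio complex purely as an illustrative capital figure; which specific costs would actually qualify for the credit is a tax-law question this toy calculation ignores.

```python
# Toy illustration of a 25 percent investment tax credit applied to a fab's
# capital cost. The $20 billion figure is Intel's announced Ohio plan, used
# here only as a round illustrative number.
def net_capex_after_credit(capex_usd, credit_rate=0.25):
    """Capital outlay net of a simple investment tax credit."""
    return capex_usd * (1.0 - credit_rate)

capex = 20e9
print(f"tax credit:         ${capex * 0.25 / 1e9:.1f} billion")              # $5.0 billion
print(f"net capital outlay: ${net_capex_after_credit(capex) / 1e9:.1f} billion")  # $15.0 billion
```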
Getting this far has been “an effort that has transcended administrations and gotten bipartisan support since its early inception,” says Steff. Still, the legislation was stalled for a long time. The bill that passed in Congress largely appropriates funds for things that were already authorized in the National Defense Authorization Act of 2021, which passed in January of that year.

Within the U.S. semiconductor industry, much of the debate fell into what Harrison calls the “normal legislative process.” Companies or industry sectors not covered under the legislation fight to gain inclusion, while those already on the inside fight to keep it exclusive, concerned that the pool of funds will become diluted. Some initial outsiders succeeded: Chip packaging, which has grown increasingly important as advanced processor makers find they cannot get enough computing from a single sliver of silicon, was swiftly added. Efforts to expand the bill beyond its manufacturing scope continued nearly up until the end. According to reports, chip designers whose processors are manufactured by others, including AMD, Nvidia, and Qualcomm, indicated their displeasure that they would not get in on the act.

Finding the balance of who’s in and who’s out meant making the terms broad enough to accomplish the goal of bringing chip manufacturing to the United States “without making it so broad that it becomes mush,” says Harrison. “They have now settled on something a little bigger than they had at first, but it’s focused on chips and their manufacture.”
  • NASA Sending Two More Helicopters to Mars
    Jul 29, 2022 07:12 AM PDT
    NASA has announced a conceptual mission architecture for the Mars Sample Return (MSR) program, and it’s a pleasant surprise. The goal of the proposed program is to return the rock samples that the Perseverance rover is currently collecting on the Martian surface to Earth, which, as you can imagine, is not a simple process. It’ll involve sending a sample-return lander (SRL) to Mars, getting those samples back to the lander, launching a rocket back to Mars orbit from the lander, and finally capturing that rocket with an orbiter that’ll cart the samples back to Earth. As you might expect, the initial idea was to send a dedicated rover to go grab the sample tubes from wherever Perseverance had cached them and bring them back to the lander with the rocket, because how else are you going to go get them, right? But NASA has decided that Plan A is for Perseverance to drive the samples to the SRL itself. Plan B, if Perseverance can’t make it, is to collect the samples with two helicopters instead. NASA’s approach here is driven by two things: First, Curiosity has been on Mars for 10 years, and is still doing great. Perseverance is essentially an improved version of Curiosity, giving NASA confidence that the newer rover will still be happily roving by the time the SRL lands. And second, the Ingenuity helicopter is also still doing awesome, which is (let’s be honest) kind of a surprise, considering that it’s a tech demo that was never designed for the kind of performance that we’ve seen. NASA now seems to believe that helicopters are a viable tool for operating on the Martian surface, and therefore should be considered as an option for Mars operations. In the new sample-return mission concept, Perseverance will continue collecting samples as it explores the river delta in Jezero crater. It’ll collect duplicates of each sample, and once it has 10 samples (20 tubes’ worth), it’ll cache the duplicates somewhere on the surface as a sort of backup plan. From there, Percy will keep exploring and collecting samples (but not duplicates) as it climbs out of the Jezero crater, where it’ll meet the sample-return lander in mid-2030. NASA says that the SRL will be designed with pinpoint landing capability, able to touch down within 50 meters of where NASA wants it to, meaning that a rendezvous with Perseverance should be a piece of cake—or as much of a piece of cake as landing on Mars can ever be. After Perseverance drives up to the SRL, a big arm on the SRL will pluck the sample tubes out of Perseverance and load them into a rocket, and then off they go to orbit and eventually back to Earth, probably by 2033. The scenario described above is how everything is supposed to work, but it depends entirely on Perseverance doing what it’s supposed to do. If the rover is immobilized, the SRL will still be able to land nearby, but those sample tubes will have to get back to the SRL somehow, and NASA has decided that the backup plan will be helicopters. The two “Ingenuity class” helicopters that the SRL will deliver to Mars will be basically the same size as Ingenuity, although a little bit heavier. There are two big differences: first, each helicopter gets a little arm for grabbing sample tubes (which weigh between 100 and 150 grams each) off of the Martian surface. And second, the helicopters get small wheels at the end of each of their legs. 
It sounds like these wheels will be powered, and while they’re not going to offer a lot of mobility, presumably it’ll be enough so that if the helicopter lands close to a sample, it can drive itself a short distance to get within grabbing distance. Here’s how Richard Cook, the Mars sample-return program manager at JPL, says the helicopters would work: “In the scenario where the helicopters are used [for sample retrieval], each of the helicopters would be able to operate independently. They’d fly out to the [sample] depot location from where SRL landed, land in proximity to the sample tubes, roll over to them, pick one up, then fly back in proximity to the lander, roll up to the lander, and drop [the tube] onto the ground in a spot where the [European Space Agency] sample transfer arm could pick it up and put it into the [Mars Ascent Vehicle]. Each helicopter would be doing that separately, and over the course of four or five days per tube, would bring all the sample tubes back to the lander that way.” This assumes that Perseverance didn’t explode or roll down a hill or something, and that it would be able to drop its sample tubes on the ground for the helicopters to pick up. Worst case, if Percy completely bites it, the SRL could land near the backup sample cache in the Jezero crater and the helicopters could grab those instead. Weirdly, the primary mission of the helicopters is as a backup to Perseverance, meaning that if the rover is healthy and able to deliver the samples to the SRL itself, the helicopters won’t have much to do. NASA says they could “observe the area around the lander,” which seems underwhelming, or take pictures of the Mars Ascent Vehicle launch, which seems awesome but not really worth sending two helicopters to Mars for. I’m assuming that this’ll get explored a little more, because it seems like a potential wasted opportunity otherwise. We’re hoping that this announcement won’t have any impact on JPL’s concept for a much larger, much more capable Mars Science Helicopter, but this sample-return mission (rather than a new science mission) is clearly the priority right now. The most optimistic way of looking at it is that this sample-return-mission architecture is a strong vote of confidence by NASA in helicopters on Mars in general, making a flagship helicopter mission that much more likely. But we’re keeping our fingers crossed.
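For a rough sense of the backup timeline, the only figures given are two independently operating helicopters and Cook's "four or five days per tube." The sketch below assumes a depot of about 10 tubes, based on the duplicate cache described earlier; that count is an assumption, and the real number could differ.

```python
# Rough retrieval-timeline estimate for the backup helicopter scenario,
# using only the figures quoted in the article plus an assumed 10-tube depot.
import math

def retrieval_days(num_tubes, num_helicopters=2, days_per_tube=(4, 5)):
    """Return a (best-case, worst-case) estimate in days to ferry all tubes back."""
    tubes_per_helicopter = math.ceil(num_tubes / num_helicopters)
    return (tubes_per_helicopter * days_per_tube[0],
            tubes_per_helicopter * days_per_tube[1])

print(retrieval_days(10))   # (20, 25): roughly three to four weeks for a 10-tube depot
```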
  • The Richer They Get, the More Meat They Eat
    Jul 28, 2022 08:00 AM PDT
    “Nothing in biology makes sense except in the light of evolution,” the eminent geneticist Theodosius Dobzhansky wrote in 1973. That goes for the human diet. We are omnivores, not herbivores. Natural selection has formed us to eat both plant and animal foods and to like doing so. Chimpanzees, the primates that are genetically the closest to us, deliberately hunt, kill, and eat small monkeys, wild pigs, and tortoises, annually consuming 4 to 12 kilograms of meat per capita for the entire population and up to 25 kg per adult male; that is more than in many preindustrial farming societies. It is well to keep this biological fact in mind when considering outlandish claims about the imminent victory of veganism. We are told that “much of the world is trending towards plant-based eating,” and it is expected that the global demand for that diet will nearly quintuple between 2016 and 2026. Are we in fact seeing a revolutionary change in behavior? Half a century is surely plenty of time to discern a trend, and the United Nations Food and Agriculture Organization has the relevant data. The world’s production of meat and poultry reached about 100 million tonnes in 1970, 233 million tonnes in 2000, and 325 million tonnes in 2020. That represents a tripling since 1970. Even after accounting for the intervening population growth, per capita meat consumption rose by 55 percent during the 50 years. This was as you would expect, because as people get richer they can buy more of the food they really want. Since 1970, there has been a 55 percent increase in worldwide average per capita meat consumption. There have been many variations, arising from differences in religion, incomes, and shifting tastes. Of all the populous nations, only Bangladesh, India, Ethiopia, and Nigeria continue to eat very little meat. In 2020, average supply rates in India and Bangladesh were still below 5 kg of carcass weight per year, per capita—a bit less than in Ethiopia. But in most of the world’s populous countries per capita meat supply has increased spectacularly during the past 50 years: In Pakistan it has doubled (still only to 16 kg); in Turkey and the Philippines, the rate has more than doubled (in both countries to nearly 40 kg); it has tripled in Egypt (to about 30 kg); Brazil’s supply has more than tripled, to 100 kg; and in China it rose more than sevenfold, from only about 9 to just over 60 kg. Not surprisingly, meat consumption has changed little in highly carnivorous countries, including Canada, Italy, and the United Kingdom, and it has declined a bit in Denmark, France, and Germany. This small decline does constitute a trend, having to do with the avoidance of fatty red meat by many younger consumers, higher intakes of seafood, and the conversion of very small numbers of people to largely vegetarian (if not entirely vegan) diets. This moderation is indeed a welcome shift, because the nutritional benefits of meat are not predicated on consuming it in large amounts. Yet even in those rich countries in which the consumption of meat has reached new heights, such as Australia, Brazil, Canada, and the United States, it has led to no demonstrable ill effects on health. Spain is the best example: Since 1975, its average meat supply has more than doubled, peaking at 120 kg in 2002 before dropping back to today’s 100 kg. This rise in meat demand was accompanied by a decline in deaths from cardiovascular disease. 
In 2019, before COVID could affect survival rates, Spain had a life expectancy at birth (for males and females combined) of 84 years. That number is the highest in the European Union—notwithstanding all that carne de cerdo asada, jamon, and chorizo… This article appears in the August 2022 print issue as “Meat-Eating Is as Human as Apple Pie.”
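The per-capita arithmetic above is easy to check. Here is a minimal sketch, assuming approximate world population figures (about 3.7 billion in 1970 and 7.8 billion in 2020, which are not quoted in the article) alongside the FAO production totals it cites:

```python
# Rough check of the per-capita meat-supply claim, using the production
# totals quoted in the article and approximate world population figures.
production_tonnes = {1970: 100e6, 2000: 233e6, 2020: 325e6}  # meat and poultry
population = {1970: 3.7e9, 2000: 6.1e9, 2020: 7.8e9}         # approximate (assumption)

def per_capita_kg(year):
    """Average supply in kilograms per person for a given year."""
    return production_tonnes[year] * 1000 / population[year]

kg_1970, kg_2020 = per_capita_kg(1970), per_capita_kg(2020)
print(f"1970: {kg_1970:.0f} kg/person, 2020: {kg_2020:.0f} kg/person")
print(f"Change: +{100 * (kg_2020 / kg_1970 - 1):.0f} percent")  # roughly the 55 percent cited
```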
  • Turing Award Winner On His Pioneering Algorithms
    Jul 27, 2022 11:00 AM PDT
    Jack Dongarra’s dream job growing up was to teach science at a public high school in Chicago. “I was pretty good in math and science, but I wasn’t a particularly good student,” Dongarra says, laughing. After he graduated high school, there was only one university he wanted to attend: Chicago State. That’s because, he says, it was known for “churning out teachers.” Chicago State accepted his application, and he decided to major in mathematics. His physics professor suggested that Dongarra apply for an internship at the Argonne National Laboratory, in Lemont, Ill., a nearby U.S. Department of Energy science and engineering research center. For 16 weeks he worked with a group of researchers designing and developing EISPACK, a package of Fortran routines that compute the eigenvalues and eigenvectors of matrices—calculations common in scientific computing. Dongarra acknowledges he didn’t have a background in or knowledge of eigenvalues and eigenvectors—or of linear algebra—but he loved what he was doing. The experience at Argonne, he says, was transformative. He had found his passion. “I thought it was a cool thing to do,” he says, “so I kept pursuing it.” About Jack Dongarra Employer: University of Tennessee, Knoxville Title: Professor emeritus, computer science Member grade: Life Fellow Alma mater: Chicago State University The IEEE Life Fellow has since made pioneering contributions to numerical algorithms and libraries for linear algebra, which allowed software to make good use of high-performance hardware. His open-source software libraries are used in just about every computer, from laptops to the world’s fastest supercomputers. The libraries include basic linear algebra subprograms (BLAS), the linear-algebra package LAPACK, parallel virtual machines (PVMs), automatically tuned linear algebra software (ATLAS), and the high-performance conjugate gradient (HPCG) benchmark. For his work, he was honored this year with the 2021 A.M. Turing Award from the Association for Computing Machinery. He received US $1 million as part of the award, which is known as the Nobel Prize of computing. “When I think about previous Turing Award recipients, I’m humbled to think about what I’ve learned from their books and papers,” Dongarra says. “Their programming languages, theorems, techniques, and standards have helped me develop my algorithms. “It’s a tremendous honor to be this year’s recipient. The award is a recognition by the computer-science community that the contributions we are making in high-performance computing are important and have an impact in the broader computer-science community and science in general.” Dongarra didn’t end up teaching science to high school students. Instead, he became a professor of electrical engineering and computer science at the University of Tennessee in Knoxville, where he taught for 33 years. The university recently named him professor emeritus. Entrepreneurial Spirit After graduating from Chicago State in 1972 with a bachelor’s degree in mathematics, Dongarra went on to pursue a master’s degree in computer science at the Illinois Institute of Technology, also in Chicago. While there he worked one day a week for Argonne with the same team of researchers. After he got his degree in 1973, the lab hired him full time as a researcher. With encouragement from his colleagues to pursue a Ph.D., he left the lab to study applied mathematics at the University of New Mexico in Albuquerque. 
He honed his knowledge of linear algebra there and began working out algorithms and writing software. He returned to Argonne after getting his doctorate in 1980 and worked there as a senior scientist until 1989, when he got the opportunity to fulfill his dream of teaching. He was offered a joint position teaching computer science at the University of Tennessee and conducting research at the nearby Oak Ridge National Laboratory which, like Argonne, is a Department of Energy facility. “It was time for me to try out some new things,” he says. “I was ready to try my hand at academia.” He says Oak Ridge operated in a similar way to Argonne, and the culture there was more or less the same. “The challenge,” he says, “was becoming a university professor.” University culture is very different from that at a government laboratory, he says, but he quickly fell into the rhythm of the academic setting. Although he loved teaching, he says, he also was attracted to the opportunity the university gave its instructors to work on technology they are passionate about. “You follow your own path and course of research,” he says. “I’ve prospered in that environment. I interact with smart people, I have the ability to travel around the world, and I have collaborations going on with people in many countries. “Academia gives you this freedom to do things and not be constrained by a company’s drive or its motivation. Rather, I get to work on what motivates me. That’s why I’ve stayed in academia for so many years.” (Image caption: In 1980, Dongarra worked as a senior scientist at Argonne National Laboratory, in Lemont, Ill. Credit: Jack Dongarra) Dongarra founded the university’s Innovative Computing Laboratory, whose mission is to provide tools for high-performance computing to the scientific community. He also directs the school’s Center for Information Technology Research. He is now a distinguished researcher at Oak Ridge, which he calls “a wonderful place, with its state-of-the-art equipment and the latest computers.” Software for Supercomputers It was working in creative environments that led Dongarra to come up with what many describe as world-changing software libraries, which have contributed to the growth of high-performance computing in many areas including artificial intelligence, data analytics, genomics, and health care. “The libraries we designed have basic components that are needed in many areas of science so that users can draw on those components to help them solve their computational problems,” he says. “That software is portable and efficient. It has all the attributes that we want in terms of being understandable and providing reliable results.” He’s currently working on creating a software library for the world’s fastest supercomputer, Frontier, which recently was installed at the Oak Ridge lab. It is the first computer that can process more than 1 quintillion operations per second. Computer-Science Recognition Dongarra has been an IEEE member for more than 30 years. “I enjoy interacting with the community,” he says in explaining why he continues to belong.
“Also I enjoy reading IEEE Spectrum and journals that are relevant to my specific field.” He has served as an editor for several IEEE journals including Proceedings of the IEEE, IEEE Computer Architecture Letters, and IEEE Transactions on Parallel and Distributed Systems. Dongarra says he’s a big promoter of IEEE meetings and workshops, especially the International Conference for High Performance Computing, Networking, Storage, and Analysis, sponsored by ACM and the IEEE Computer Society, of which he is a member. He’s been attending the event every year since 1988. He has won many awards at the conference for his papers. “That conference is really a homecoming for the high-performance computing community,” he says, “and IEEE plays a major role.” IEEE is proud of Dongarra’s contributions to computing and has honored him over the years. In 2008 he received the first IEEE Medal of Excellence in Scalable Computing. He also received the 2020 Computer Pioneer Award, the 2013 ACM/IEEE Ken Kennedy Award, the 2011 IEEE Charles Babbage Award and the 2003 Sidney Fernbach Award. “I’m very happy and proud to be a member of IEEE,” he says. “I think it provides a valuable service to the community.”
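A quick way to appreciate what Dongarra's libraries do is to call them: NumPy's dense linear-algebra routines are backed by LAPACK and BLAS, the successors to the EISPACK work described above. A minimal eigenvalue example (the matrix here is arbitrary):

```python
# Eigenvalues and eigenvectors of a small symmetric matrix.
# numpy.linalg.eig dispatches to LAPACK's *geev routines under the hood.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print("eigenvalues:", eigenvalues)

# Check the defining relation A @ v = lambda * v for each eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```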
  • Necrobotics: Dead Spiders Reincarnated as Robot Grippers
    Jul 26, 2022 02:34 PM PDT
    Bugs have long taunted roboticists with how utterly incredible they are. Astonishingly mobile, amazingly efficient, super robust, and in some cases, literally dirt cheap. But making a robot that’s an insect equivalent is extremely hard—so hard that it’s frequently easier to just hijack living insects themselves and put them to work for us. You know what’s even easier than that, though? Hijacking and repurposing dead bugs. Welcome to necrobotics. Spiders are basically hydraulic (or pneumatic) grippers. Living spiders control their limbs by adjusting blood pressure on a limb-by-limb basis through an internal valve system. Higher pressure extends the limb, acting against an antagonistic flexor muscle that curls the limb when the blood pressure within is reduced. This, incidentally, is why spider legs all curl up when the spider shuffles off the mortal coil: There’s a lack of blood pressure to balance the force of the flexors. This means that actuating all eight limbs of a spider that has joined the choir invisible is relatively straightforward. Simply stab it in the middle of that valve system, inject some air, and poof, all of the legs inflate and straighten. As the researchers describe it: “Our strategy contrasts with bioinspired approaches in which researchers look to the spider’s physical morphology for design ideas that are subsequently implemented in complex engineered systems, and also differs from biohybrid systems in which live or active biological materials serve as the basis for a system, demanding careful and precise maintenance. We repurposed the cadaver of a spider to create a pneumatically actuated gripper that is fully functional following only one simple assembly step, allowing us to circumvent the usual tedious and constraining fabrication steps required for fluidically driven actuators and grippers.” This work, from researchers at the Preston Innovation Lab at Rice University, in Houston, is described in a paper just published in Advanced Science. In the paper, the team does a little bit of characterization of the performance of the deceased-spider gripper, and it’s impressive: It can lift 1.3 times its own weight, exert a peak gripping force of 0.35 millinewton, and can actuate at least 700 times before the limbs or the valve system start to degrade in any significant way. After 1,000 cycles, some cracks appear in the dead spider’s joints, likely because of dehydration. But the researchers think that by coating the spider in something like beeswax, they could likely forestall this breakdown a great deal. The demised-spider gripper is able to successfully pick up a variety of objects, likely because of a combination of the inherent compliance of the legs as well as hairlike microstructures on the legs that work kind of like a directional adhesive. We are, unfortunately (although somewhat obviously), unable to say that no spiders were harmed over the course of this research. According to the paper, “the raw biotic material (i.e., the spider cadaver) was obtained by euthanizing a wolf spider through exposure to freezing temperature (approximately -4 °C) for a period of 5–7 days.” The researchers note that “there are currently no clear guidelines in the literature regarding ethical sourcing and humane euthanasia of spiders,” which is really something that should be figured out, considering how much we know about the cute-but-still-terrifying personalities some spiders have.
The wolf spider was a convenient choice because it exerts a gripping force approximately equal to its own weight, which raises the interesting question of what kind of performance could be expected from spiders of different sizes. Based on a scaling analysis, the researchers suggest that itty-bitty 10-milligram jumping spiders could exert a gripping force exceeding 200 percent of their body weight, while very much not itty-bitty 200-gram goliath spiders may only be able to grasp with a force that is 10 percent of their body weight. But that works out to 20 grams, which is still kind of terrifying. Goliath spiders are big. For better or worse, arthropods seem likely to offer the most necrobotic potential, because fabricating pneumatics and joints and muscles at that scale can be very challenging, if not impossible. And spiders (as well as other spiderlike arthropods) in particular offer biodegradable, eco-friendly on-demand actuation with capabilities that the researchers hope to extend significantly. A capacitive proximity sensor could enable autonomy, for example, to “discreetly capture small biological creatures for sample collection in real-world scenarios.” Independent actuation of limbs could result in necrobotic locomotion. And the researchers are also planning to explore high-speed articulation with whip scorpions as well as true microscale manipulation with Patu digua spiders. I’ll let you google whip scorpion on your own because they kind of freak me out, but here’s a picture of a Patu digua, with a body measuring about a quarter of a millimeter: Squee!
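To see where estimates like those come from, here is an illustrative sketch of the kind of scaling argument involved, assuming grip force scales with limb cross-sectional area (roughly mass^(2/3)) while weight scales with mass. The reference wolf-spider mass is an assumed ballpark, not a figure from the paper, so the printed ratios show the trend rather than the paper's exact numbers:

```python
# Illustrative isometric-scaling sketch (not the paper's analysis):
# if grip force ~ m**(2/3) and weight ~ m, the force-to-weight ratio ~ m**(-1/3).
WOLF_SPIDER_MASS_G = 0.5  # assumed reference mass; its grip roughly equals its weight

def grip_to_weight(mass_g, ref_mass_g=WOLF_SPIDER_MASS_G, ref_ratio=1.0):
    """Predicted grip-force-to-body-weight ratio, scaled from the reference spider."""
    return ref_ratio * (mass_g / ref_mass_g) ** (-1 / 3)

for name, mass_g in [("jumping spider", 0.01), ("wolf spider", 0.5), ("goliath spider", 200.0)]:
    print(f"{name:>15}: {mass_g:7.2f} g -> grip ≈ {100 * grip_to_weight(mass_g):.0f}% of body weight")
```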
  • Micron Is First to Deliver 3D Flash Chips With More Than 200 Layers
    Jul 26, 2022 11:00 AM PDT
    Boise, Idaho–based memory manufacturer Micron Technology says it has reached volume production of a 232-layer NAND flash-memory chip. It’s the first such chip to pass the 200-layer mark, and it’s been a tight race. Competitors are currently providing 176-layer technology, and some already have working chips in hand. The new Micron tech as much as doubles the density of bits stored per unit area versus competing chips, packing in 14.6 gigabits per square millimeter. Its 1-terabit chips are bundled into 2-terabyte packages, each of which is barely more than a centimeter on a side and can store about two weeks’ worth of 4K video. With 81 trillion gigabytes (81 zettabytes) of data generated in 2021 and International Data Corp. (IDC) predicting 221 ZB in 2026, “storage has to innovate to keep up,” says Alvaro Toledo, Micron’s vice president of data-center storage. The move to 232 layers is a combination and extension of many technologies Micron has already deployed. To get a handle on them, you need to know the basic structure and function of 3D NAND flash. The chip itself is made up of a bottom layer of CMOS logic and other circuitry that’s responsible for controlling reading and writing operations and getting data on and off the chip as quickly and efficiently as possible. Improvements to this layer, such as optimizing the path data travels and reducing the capacitance of the chip’s inputs and outputs, yielded a 50 percent improvement in the data transfer rate to 2.4 Gb/s. Above the CMOS are layers upon layers of NAND flash cells. Unlike other devices, flash-memory cells are built vertically. They start as a (relatively) deep, narrow hole etched through alternating layers of conductor and insulator. Then the holes are filled with material and processed to form the bit-storing part of the device. It’s the ability to reliably etch and fill the holes through all those layers that’s a key limit to the technology. Instead of etching through all 232 layers in one go, Micron’s process builds them in two parts and stacks one atop the other. Even so, “it’s an astounding engineering feat,” says Toledo. “That was one of the biggest challenges we overcame.” According to Toledo, there is a path toward even more layers in future NAND chips. “There are definitely challenges,” he says. But “we haven’t seen the end of that path.” Competitors are hot on Micron’s heels. SK Hynix says it is shipping samples of a 238-layer TLC product that will be in full production in 2023. Samsung says it has working chips with more than 200 layers, but it hasn’t detailed when these will go into full production. In addition to adding more and more layers, NAND flash makers have been increasing the density of stored bits by packing multiple bits into a single device. Each of the Micron chip’s memory cells is capable of storing three bits. That is, the charge stored in each cell produces a distinct enough effect to discern eight different states. Though 3-bit-per-cell products (called TLC) are the majority, four-bit products (called QLC) are also available. One QLC chip presented by Western Digital researchers at the IEEE International Solid State Circuits Conference earlier this year achieved a 15 Gb/mm² areal density in a 162-layer chip. And Kioxia engineers reported 5-bit cells last month at the IEEE Symposium on VLSI Technology and Circuits. There has even been a 7-bit cell demonstrated, but it required dunking the chip in 77-kelvin liquid nitrogen. 
This post was updated on 2 August 2022 to clarify the state of SK Hynix's and Samsung's plans.
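A couple of back-of-the-envelope numbers follow directly from the figures quoted above; the implied array area is an inference from those figures (and ignores any peripheral area), not something Micron has stated:

```python
# Back-of-the-envelope checks on the quoted NAND figures.
TERABIT = 1e12                 # bits, decimal convention
DENSITY_BITS_PER_MM2 = 14.6e9  # 14.6 gigabits per square millimetre

# Implied NAND-array area of a 1-terabit die at the quoted areal density.
array_area_mm2 = TERABIT / DENSITY_BITS_PER_MM2
print(f"Implied 1-Tb array area: ~{array_area_mm2:.0f} mm^2")  # about 68 mm^2

# A cell storing n bits must resolve 2**n distinct charge states.
for bits, name in [(3, "TLC"), (4, "QLC"), (5, "5-bit cells")]:
    print(f"{name}: {2**bits} distinguishable states per cell")
```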
  • Weave Your Own Apollo-Era Memory
    Jul 26, 2022 08:00 AM PDT
    The spacecraft that took men to the surface of the moon and back relied on computers that pushed the state of the art when they were built. Designed by MIT, the Apollo Guidance Computers came with 72 kilobytes of ROM and 4 kilobytes of RAM. This memory used a form of magnetic core memory, where multiple hair-thin wires passed through tiny ferrite toroids to store 1s and 0s. The work of assembling the AGC memories fell to women at Raytheon who formerly worked with textiles, and once you complete the Core64 kit, you will have a newfound respect for their skills. As the name suggests, the US $180 Core64 kit provides just 64 bits of RAM, but you can get quite a bit more mileage out of it than just the ability to store up to eight extended ASCII characters. I picked up a beta version of the current kit at the last Vintage Computer Festival East from its creator, Andy Geppert, in part because I was impressed at how expandable he’s made it. There are multiple ways to integrate the kit into larger projects using interfaces such as I2C or USB, or even by connecting directly to the memory lines themselves. You can also use the kit to detect magnetic flux lines, and read the resulting bits that get set. I was able to use it in this way to generate random numbers using the included magnetic stylus, although you’d have to take care to create a setup that eliminates any geometric bias if you want to use those numbers for cryptographic purposes. Reading a bit from magnetic core memory is destructive—to do a read operation, the computer first tries to clear the bit in question by sending a current through the corresponding vertical and horizontal driver wires that pass through the ferrite cores. Flipping a bit causes a voltage to be induced in the “sense” wire that is threaded through all the cores. So if you detect a voltage, it means that the bit was originally a 1, while no voltage means the bit was a 0 all along. If a 1 is detected, the system must then set that bit back to 1 with a reverse pulse of current on the driver lines. (Image caption: The Core64 kit comes with everything you need to create a self-contained memory, but you can expand it using a number of hardware interfaces, or replace the AA battery pack with a rechargeable battery. Credit: James Provost) This may seem like a slow and cumbersome process, but magnetic core memories were an enormous leap forward for computers. They were reliable and allowed true random access to data, unlike the previous generation of digital memory, typically some form of delay-line system, in which all the bits circulated one by one through the memory, so you had to wait till the bit you wanted came round to set it or read it. In the Core64, the job of reading, writing, and rewriting the cores as needed falls to a Teensy 3.2 microcontroller, which also provides the USB interface and drives an array of RGB-addressable LEDs mounted behind the core memory. Normally, these LEDs indicate the state of the corresponding core, but they can also be used to display scrolling text or simple animations. In fact, thanks to the included lanyard and protective cover sheets, you can wear the entire kit as a badge. (If you’re wondering why you might want to do that, then welcome to the “badgelife” subculture that started with hacker gatherings such as Def Con and HOPE. In a nutshell, think about tricking out a car, but with something light enough to wear around your neck.) 
Assembling the logic board the Teensy plugs into is pretty straightforward, although the requirement to cut a tiny power trace on the Teensy does require a careful touch, as does soldering a surface mount connector to the microcontroller. However, this is just a preview of the delicate work to come. If you’ve ever impulsively added a nice head-mounted magnifier or precision tweezers to your Digikey or Mouser order, just in case—well, their time has come. Unless you have truly preternatural vision, threading the cores—which are about 1 millimeter in diameter—can’t really be done without magnification, and tweezers are essential for manipulating the very fine wire involved. For me the hardest part was the second step of the memory-weaving operation, when eight ferrite cores have to be placed on each of eight wires, abacus style. It’s very easy to drop a core, and if one escapes your work area, you’ll never find it. I ended up creating a corral of Dungeons & Dragons terrain to keep errant cores from bouncing away, and so managed to lose only two—fortunately, four cores are provided as spares. (Image caption: Flipping the magnetic polarity of a ferrite core to represent either a 1 or 0 requires more current than can be delivered by a single horizontal [blue] or vertical [red] drive wire alone. A core at the intersection of two active wires can be flipped, however, and this flipping induces a voltage in the sense wire. Credit: James Provost) The rest of the job of weaving the cores, sense, and driver lines together gets progressively easier, but take kit creator Geppert’s advice and don’t try to do it all in one sitting. It’s a job that’s best spread out an hour or so at a time. And don’t be tempted to straighten out and tension all the wires too early—the cores have to be placed at alternating angles, and you will inevitably place one the wrong way around, forcing you to undo and redo your work. However, Geppert has provided excellent instructions, along with videos of the weaving process, so it’s just a matter of being methodical. The only nit I have is that the documentation describing the firmware and the various modes the kit can be put into is minimal, but as its source code is available on GitHub and well commented, you can skim it if you’ve any questions. So build and enjoy your old-fangled solid-state memory—after all, 64 bits should be enough for anyone. This article appears in the August 2022 print issue as “Weave Your Own Memory.”
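The destructive read-and-rewrite cycle described above is easy to model. The sketch below is a toy simulation for intuition, not the Core64 firmware (which lives in the Teensy and drives real currents):

```python
# Toy model of destructive core-memory reads: a read drives the core toward 0,
# a flip induces a pulse on the shared sense wire, and the controller rewrites
# the 1 it just erased.
class CoreArray:
    def __init__(self, rows=8, cols=8):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, r, c, value):
        self.bits[r][c] = 1 if value else 0

    def read(self, r, c):
        flipped = self.bits[r][c] == 1  # sense wire sees a pulse only if the core flips
        self.bits[r][c] = 0             # the read itself cleared the bit...
        if flipped:
            self.bits[r][c] = 1         # ...so a reverse pulse restores the 1
        return 1 if flipped else 0

mem = CoreArray()
mem.write(2, 5, 1)
assert mem.read(2, 5) == 1  # the value survives only because of the rewrite step
assert mem.read(0, 0) == 0
```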
  • Andrew Ng: Unbiggen AI
    Feb 09, 2022 07:31 AM PST
    Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A. Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias. Andrew Ng on... What’s next for really big models The career advice he didn’t listen to Defining the data-centric AI movement Synthetic data Why Landing AI asks its customers to do the work The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way? Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions. When you say you want a foundation model for computer vision, what do you mean by that? Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them. What needs to happen for someone to build a foundation model for video? Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision. Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. 
While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries. Back to top It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users. Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation. “In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.” —Andrew Ng, CEO & Founder, Landing AI I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince. I expect they’re both convinced now. Ng: I think so, yes. Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.” Back to top How do you define data-centric AI, and why do you consider it a movement? Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data. When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make it a systematic engineering discipline. The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up. You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them? Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. 
But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn. When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set? Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system. “Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.” —Andrew Ng For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance. Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training? Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle. One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way. When you talk about engineering the data, what do you mean exactly? Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. 
But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity. For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow. Back to top What about using synthetic data, is that often a good solution? Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development. Do you mean that synthetic data would allow you to try the model on more data sets? Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category. “In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.” —Andrew Ng Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first. Such as data augmentation, improving labeling consistency, or just asking a factory to collect more data. Back to top To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment? Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data. One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory. How do you deal with changing needs? 
If products change or lighting conditions change in the factory, can the model keep up? Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations. In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists? So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work. Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains. Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement? Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it. Back to top This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
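Ng's point about flagging data drift can be illustrated with a generic check that is not Landing AI's tooling: compare one summary statistic of recent factory images (mean brightness here) against the training distribution with a two-sample test, and raise a flag when they diverge.

```python
# Minimal drift flag: a two-sample Kolmogorov-Smirnov test on an image statistic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_brightness = rng.normal(loc=128, scale=12, size=2000)   # stand-in for training data
recent_brightness = rng.normal(loc=119, scale=12, size=300)   # lighting shifted on the line

statistic, p_value = ks_2samp(train_brightness, recent_brightness)
if p_value < 0.01:
    print(f"Possible drift (KS={statistic:.2f}, p={p_value:.1e}): "
          "review recent images, relabel if needed, and consider retraining.")
else:
    print("No significant drift detected in this statistic.")
```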
  • How AI Will Change Chip Design
    Feb 08, 2022 06:00 AM PST
    The end of Moore’s Law is looming. Engineers and designers can do only so much to miniaturize transistors and pack as many of them as possible into chips. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process. Samsung, for instance, is adding AI to its memory chips to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has doubled its processing power compared with that of its previous version. But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with Heather Gorr, senior product manager for MathWorks’ MATLAB platform. How is AI currently being used to design the next generation of chips? Heather Gorr: AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider. (Image: Heather Gorr. Credit: MathWorks) Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI. What are the benefits of using AI for chip design? Gorr: Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a reduced order model, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your Monte Carlo simulations using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design. So it’s like having a digital twin in a sense? Gorr: Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end. So, it’s going to be more efficient and, as you said, cheaper? Gorr: Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. 
That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering. We’ve talked about the benefits. How about the drawbacks? Gorr: The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it's not going to be as accurate as that precise model that we’ve developed over the years. Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It's a case where you might have models to predict something and different parts of it, but you still need to bring it all together. One of the other things to think about too is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge. How can engineers use AI to better prepare and extract insights from hardware or sensor data? Gorr: We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start. One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on GitHub or MATLAB Central, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI. What should engineers and designers consider when using AI for chip design? Gorr: Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team. How do you think AI will affect chip designers’ jobs? Gorr: It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip. How do you envision the future of AI and chip design? Gorr: It's very much dependent on that human element—involving people in the process and having that interpretable model. 
We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.
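Gorr's reduced-order/surrogate idea can be sketched in a few lines: pay for a handful of expensive simulation runs, fit a cheap regression model to them, then sweep parameters or run Monte Carlo on the surrogate. The `expensive_simulation` function below is a stand-in, not a real physics solver, and the Gaussian-process choice is just one reasonable option:

```python
# Surrogate-model sketch: learn a cheap approximation of an expensive simulation
# from a few sampled runs, then do Monte Carlo on the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    """Stand-in for a slow physics-based model of some design response."""
    return np.sin(3 * x) + 0.5 * x**2

# Small design-of-experiments set: the only points where we pay full price.
X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# Sweeping the design space on the surrogate is now nearly free.
samples = np.random.default_rng(1).uniform(0.0, 2.0, size=(10_000, 1))
print(f"Estimated mean response over the sweep: {surrogate.predict(samples).mean():.3f}")
```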
  • Atomically Thin Materials Significantly Shrink Qubits
    Feb 07, 2022 08:12 AM PST
    Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality. IBM has adopted the superconducting qubit road map of reaching a 1,121-qubit processor by 2023, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability. Now researchers at MIT have been able both to reduce the size of the qubits and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100. “We are addressing both qubit miniaturization and quality,” said William Oliver, the director of the Center for Quantum Engineering at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.” The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit. Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C). (Image caption: Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator. Credit: Nathan Fiske/MIT) In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another. As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance. In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates. 
“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author Joel Wang, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory of Electronics. On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas. While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor. “What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.” This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits. “The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang. Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.
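The footprint argument comes down to the parallel-plate relation C = ε0·εr·A/d: a dielectric only a few nanometres thick needs far less plate area for a given capacitance than the coplanar layout described above. A minimal sketch with assumed, order-of-magnitude values (the hBN permittivity, film thickness, and target capacitance below are illustrative assumptions, not figures from the MIT paper):

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d, so a nanometre-scale hBN
# gap shrinks the plate area needed for a given capacitance.
EPS0 = 8.854e-12   # F/m
EPS_R_HBN = 3.5    # rough relative permittivity of hBN (assumed)

def plate_area(capacitance_f, thickness_m, eps_r=EPS_R_HBN):
    """Plate area in m^2 for a parallel-plate capacitor of given C and dielectric gap."""
    return capacitance_f * thickness_m / (EPS0 * eps_r)

TARGET_C = 70e-15  # ~70 fF, a typical transmon shunt capacitance (assumed)
area = plate_area(TARGET_C, thickness_m=10e-9)
print(f"Required plate: ~{area**0.5 * 1e6:.1f} µm on a side, "
      "versus the ~100 µm plates of the coplanar design described above")
```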

Engineering on Twitter