Engineering Community Portal

MERLOT Engineering

Welcome – From the Editor

Welcome to the Engineering Portal on MERLOT. Here you will find a wide variety of resources, on topics ranging from aerospace engineering to petroleum engineering, to support your teaching and research.

As you scroll down this page, you will find many Engineering resources, including the most recently added Engineering materials and members, journals and publications, and Engineering education alerts and Twitter feeds.

Showcase

Over 150 embeddable or downloadable 3D simulations in the subject areas of Automation, Electro/Mechanical, Process Control, and Renewable Energy. These short (3-7 minute) simulations cover a range of engineering topics and help students understand conceptual material.

Each video is hosted on Vimeo and can be played, embedded, or downloaded for use in the classroom or online. Another option is an embeddable HTML player, created in Storyline, with review questions for each simulation that reinforce the concepts learned.

The collection was made possible by a Department of Labor grant. Extensive storyboarding and scripting work with instructors and industry experts ensures that the content is accurate and up to date.

Engineering Technology 3D Simulations in MERLOT

New Materials

New Members

Engineering on the Web

  • World-Record Data Transmission Speed Smashed
    Aug 02, 2021 07:30 AM PDT
    Researchers at Japan's National Institute of Information and Communications Technology (NICT) in Tokyo have almost doubled the previous long-haul data transmission speed record of 172 Tb/s, established by NICT and others in April 2020. The researchers recently presented their results at the International Conference on Optical Fiber Communications. In breaking the record, they used a variety of technologies and techniques that have yet to become mainstream: special low-loss 4-core spatial division multiplexing (SDM) fiber employed in research projects, erbium- and thulium-doped fiber amplifiers, and distributed Raman amplification; and, in addition to the C-band and L-band transmission wavelengths, they used the S-band.
    Until now, S-band usage has been limited to lab tests conducted over just a few tens of kilometers in research projects. But perhaps of most significance is the claimed high transmission quality in the 4-core fiber, which maintains the same outer diameter—0.125 mm—of glass cladding used in standard single-mode fiber. "Because our SDM fiber has the same cladding as standard single-mode fibers, it can be compatible with the same cabling technology currently in use and makes early adoption more likely," says Ben Puttnam, a senior researcher at NICT and leader of the record-breaking project team.
    There are other benefits, too, Puttnam notes. "Keeping the same diameter is also important because the mechanical properties and failure probabilities are well understood. Exactly how bending and twisting of larger fibers may affect their properties is not fully known." In the past, he says, they have explored some research fibers with cladding diameters almost 3x larger and could achieve transmission rates of over 10 petabits a second. "But these fibers are hard to handle and sometimes snap like dry spaghetti." He adds that larger-diameter fibers are also harder to make in long span lengths, and the likelihood of splicing errors increases when fibers are joined together.
    The experimental setup in the NICT lab comprises a recirculating transmission loop to achieve a distance of 3,001 km. Wavelength division multiplexing (WDM) of 552 channels, spaced 25 GHz apart and generated from a comb source and tunable lasers, is used to carry the data. And to double the amount of information carried before launching the 120 nm signal into each of the four cores of the fiber, dual-polarization modulators were employed. At 69.8 km intervals along the fiber, loss is compensated for by two kinds of amplifiers—one doped with erbium, the other with thulium—to boost the signals in the C and L bands and in the S band, respectively. In addition, Raman pump amplifiers provide gain along the transmission fiber, preventing the signal power from decaying excessively. This leads to less noise when the signal is amplified and improves overall performance.
    [Figure: Schematic diagram of NICT's transmission system. Image: NICT]
    The decoded data rates across the S, C, and L bands are 102.5 Tb/s (S), 108.7 Tb/s (C), and 107.7 Tb/s (L). "Now we are working to increase the transmission distance," says Puttnam. "We've already measured channels at a distance of 8,000 kilometers and want to push on to at least 10,000 kilometers by better optimizing the gain flattening." As for transmission rates, "Over short spans of 50 to 70 kilometers, I think we could eventually transmit over 1 petabit a second in this fiber."
And once the technology has been optimized, and provided SDM fiber is shown to be practically and economically manufacturable with the same cladding as standard single-mode fiber, what then? Puttnam sees one obvious user for the technology: operators of trans-ocean submarine cables, where space is at a premium. Another likely customer would be large data centers, where high-density connectors can be crucial and new fibers are routinely added. The potential bandwidth of SDM fibers would also be attractive in terrestrial fiber networks, but the costly business of fiber deployment somewhat complicates the issue. After deployment, Puttnam expects applications like high-resolution video streaming, online gaming, and IoT communications to eat up much of the additional bandwidth, as will the advent of 6G in a decade's time.
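A quick arithmetic check of the figures quoted above: summing the decoded per-band rates gives the aggregate single-fiber throughput, and dividing by the 552 WDM channels gives the average rate carried per channel (across all four cores combined). The short Python sketch below uses only the numbers reported in the article.

    # Back-of-the-envelope check using only the figures quoted in the article.
    band_rates_tbps = {"S": 102.5, "C": 108.7, "L": 107.7}      # decoded data rate per band, Tb/s

    total_tbps = sum(band_rates_tbps.values())                  # aggregate single-fiber throughput
    num_channels = 552                                          # 25-GHz-spaced WDM channels
    per_channel_gbps = total_tbps * 1000 / num_channels         # average per channel, all 4 cores combined

    print(f"Aggregate throughput: {total_tbps:.1f} Tb/s")       # ~318.9 Tb/s
    print(f"Average per channel:  {per_channel_gbps:.0f} Gb/s") # ~578 Gb/s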
  • Here's how women engineers are trying to level the playing field
    Aug 02, 2021 06:06 AM PDT
  • Hypersonic, autonomous flight research bolstered by $1.5 million grant
    Aug 02, 2021 05:44 AM PDT
  • Markforged Names John Howard Vice President, Engineering
    Aug 02, 2021 05:22 AM PDT
  • Engineering and Construction Costs Rise Again in July on Semiconductor Shortage
    Aug 02, 2021 05:22 AM PDT
  • Orbital Energy Group's Subsidiary, Gibson Technical Services, Acquires Privately Owned ...
    Aug 02, 2021 05:00 AM PDT
  • Austin Engineering's strategic review identifies innovation, technology opportunities
    Aug 02, 2021 04:48 AM PDT
  • Riverside City Council to Consider Engineering Contracts, Community Center Rendering Monday
    Aug 02, 2021 04:26 AM PDT
  • GR Engineering to help double processing capacity at NSR's Thunderbox
    Aug 02, 2021 04:03 AM PDT
  • Engineering Biologics toward Challenging Membrane Protein Targets
    Aug 02, 2021 03:51 AM PDT
  • Engineering student dies in accident in Hyderabad
    Aug 02, 2021 03:41 AM PDT
  • Is Jacobs Engineering Group (NYSE:J) A Risky Investment?
    Aug 02, 2021 03:39 AM PDT
  • Study for commercial-scale hydrogen imports
    Aug 02, 2021 03:37 AM PDT
  • Engineering company fined after employee scalped
    Aug 02, 2021 03:18 AM PDT
  • Commercialization of Petroteq's Technology Has Been Verified by Reputable Third-Party ...
    Aug 02, 2021 02:56 AM PDT
  • 12 Companies Win Spots on State Department's Worldwide Architectural, Engineering Support IDIQ
    Aug 02, 2021 02:33 AM PDT
  • Navy Picks 7 Small Businesses for $250M Systems Engineering, Technical Support IDIQ
    Aug 02, 2021 02:22 AM PDT
  • Boats Should Be Sleek—But Only Up to a Point
    Jul 30, 2021 12:00 PM PDT
    In comparison with Moore's Law, the nonsilicon world's progress can seem rather glacial. Indeed, some designs made of wood or metal came up against their functional limits generations ago. The length-to-beam ratio (LBR) of large oceangoing vessels offers an excellent example of such technological maturity. This ratio is simply the quotient of a ship's length and breadth, both measured at the waterline; you can think of it simply as the expression of a vessel's sleekness. A high LBR favors speed but restricts maneuverability as well as cargo hold and cabin design. These considerations, together with the properties of shipbuilders' materials, have limited the LBR of large vessels to single digits.
    If all you have is a rough wickerwork over which you stretch thick animal skins, you get a man-size, circular or slightly oval coracle—a riverboat or lake boat that has been used since antiquity from Wales to Tibet. Such a craft has an LBR close to 1, so it's no vessel for crossing an ocean, but in 1974 an adventurer did paddle one across the English Channel.
    [Infographic: They Just Keep Getting Bigger. In each successive era, the biggest ships have gotten even bigger, but the length-to-beam ratio rose only up to a certain point. Narrower designs incur less resistance and are thus faster, but the requirements of seaworthiness and of cargo capacity have set limits on how far the slimming can go. Illustration: John MacNeill]
    Building with wood allows for sleeker designs, but only up to a point. The LBR of ancient and medieval commercial wooden sailing ships increased slowly. Roman vessels transporting wheat from Egypt to Italy had an LBR of about 3; ratios of 3.4 to 4.5 were typical for Viking ships, whose lower freeboard—the distance between the waterline and the main deck of a ship—and much smaller carrying capacity made them even less comfortable. The Santa María, a small carrack captained by Christopher Columbus in 1492, had an LBR of 3.45. With high prows and poops, some small carracks had a nearly semicircular profile. Caravels, used on the European voyages of discovery during the following two centuries, had similar dimensions, but multidecked galleons were sleeker: The Golden Hind, which Francis Drake used to circumnavigate Earth between 1577 and 1580, had an LBR of 5.1.
    Little changed over the following 250 years. Packet sailing ships, the mainstays of European emigration to the United States before the Civil War, had an LBR of less than 4. In 1851, Donald McKay crowned his career designing sleek clippers by launching the Flying Cloud, whose LBR of 5.4 had reached the practical limit of nonreinforced wood; beyond that ratio, the hulls would simply break.
    But by that time wooden hulls were on the way out. In 1845 the SS Great Britain (designed by Isambard Kingdom Brunel, at that time Britain's most famous engineer) was the first iron vessel to cross the Atlantic—it had an LBR of 6.4. Then inexpensive steel became available (thanks to Bessemer process converters), inducing Lloyd's of London to accept its use as an insurable material in 1877. In 1881, the Cunard Line's SS Servia, the first large trans-Atlantic steel-hulled liner, had an LBR of 9.9.
The LBRs of later steel liners clustered close around that ratio: 9.6 for the RMS Titanic (1912); 9.3 for the SS United States (1951); and 8.9 for the SS France (1960, two years after the Boeing 707 began the rapid elimination of trans-Atlantic passenger ships). Huge container ships, today's most important commercial vessels, have relatively low LBRs in order to accommodate packed rows of standard steel container units. The MSC Gülsün (launched in 2019), the world's largest, with a capacity of 23,756 container units, is 1,312 feet (399.9 meters) long and 202 feet (61.5 meters) wide; hence its LBR is only 6.5. The Symphony of the Seas (2018), the world's largest cruise ship, is only about 10 percent shorter, but its narrower beam gives it an LBR of 7.6. Of course, there are much sleeker vessels around, but they are designed for speed, not to carry massive loads of goods or passengers. Each demi-hull of a catamaran has an LBR of about 10 to 12, and in a trimaran, whose center hull has no inherent stability (that feature is supplied by the outriggers), the LBR can exceed 17. This article appears in the August 2021 print issue as "A Boat Can Indeed Be Too Long and Too Skinny."
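Because the LBR is simply waterline length divided by waterline beam, the figures above are easy to verify. The Python sketch below recomputes the MSC Gülsün's ratio from the dimensions quoted in the article and lists, for comparison, the ratios the article gives for other vessels (their underlying dimensions are not quoted here, so only the ratios are repeated).

    # Length-to-beam ratio (LBR) = waterline length / waterline beam.
    def lbr(length_m: float, beam_m: float) -> float:
        return length_m / beam_m

    # Only the MSC Gulsun's dimensions are quoted in the article, so it is the one recomputed here.
    print(f"MSC Gulsun LBR: {lbr(399.9, 61.5):.1f}")   # ~6.5, matching the article

    # Ratios quoted directly in the article, for comparison:
    quoted = {
        "Roman grain ship": 3.0, "Santa Maria": 3.45, "Golden Hind": 5.1,
        "Flying Cloud": 5.4, "SS Great Britain": 6.4, "Symphony of the Seas": 7.6,
        "RMS Titanic": 9.6, "SS Servia": 9.9,
    }
    for ship, ratio in quoted.items():
        print(f"{ship:>22}: {ratio}")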
  • Insulator-Conductor Transition Points Toward Ultra-Efficient Computing
    Jul 30, 2021 11:10 AM PDT
    For the first time, researchers have been able to image how atoms in a computer switch move around on fast timescales while it turns on and off. This ability to peer into the atomic world may hold the key to a new kind of switch for computers that will speed up computing and reduce the energy required for computer processing.
    The research team, made up of scientists from the Department of Energy's SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Penn State University, and Purdue University, was able to capture snapshots of atomic motion in a device while it was switching. The researchers believe that the new insights this technique will generate into how switches operate will not only improve future switch technology, but will also resolve the ultimate speed and energy-consumption limits for computing devices.
    Switches in computer chips control the flow of electrons. By applying an electrical charge to the switch and then removing that charge, the switch can be toggled back and forth between acting as an insulator that blocks the flow of electrons and a conductor that allows it. This on/off switch is the basis for the "0-1" of binary computer logic.
    While studying a switch made from vanadium dioxide, the researchers were able to detect with their imaging technique the existence of a short-lived transition stage between the material going from an insulator to a conductor and then back again. "In this transient state, the structure remains the same as in the starting insulating state, but there is electronic reorganization which causes it to become metallic," explained Aditya Sood, a postdoctoral researcher at SLAC National Lab and Stanford University. "We infer this from subtle signatures in how the electron diffraction pattern changes during this electrically driven transition."
    In order to observe this transient state, the researchers had to develop a real-time imaging technology based on electron diffraction. Electron diffraction by itself has existed for many decades and is used routinely in transmission electron microscopes (TEMs). But in these previous kinds of applications, electron imaging was used just to study a material's structure in a static way, or to probe its evolution on slow timescales. While ultrafast electron diffraction (UED) has been developed to make time-resolved measurements of atomic structure, previous implementations of this technique relied on optical pulses to impulsively excite (or "pump") materials and image the resulting atomic motions.
    What the scientists did here for the first time was create an ultrafast technique in which electrical (not optical) pulses provide the impulsive excitation. This makes it possible to electrically pulse a device and look at the ensuing atomic-scale motions on fast timescales (down to nanoseconds), while simultaneously measuring the current through the device.
    [Figure: The team used electrical pulses, shown in blue, to turn their custom-made switches on and off several times. They timed these electrical pulses to arrive just before the electron pulses produced by SLAC's ultrafast electron diffraction source, MeV-UED, which captured the atomic motions. Image: Greg Stewart/SLAC National Accelerator Laboratory]
    "We now have a direct way to correlate very fast atomic movements at the angstrom scale with electronic flow across device length scales," said Sood.
To do this, the researchers built a new apparatus that integrated an electronic device to which they could apply fast electrical bias pulses, such that each electrical bias pulse was followed, after a controllable time delay, by a "probing" electron pulse (which creates a diffraction pattern that reveals where the atoms are). "By repeating this many times, each time changing the time delay, we could effectively construct a movie of the atomic movements during and after electrical biasing," explained Sood. Additionally, the researchers built an electrical circuit around the device to concurrently measure the current flowing through it during the transient switching process.
While custom-made vanadium-dioxide-based switches were fabricated for this research, Sood says that the technique could work on any kind of switch as long as the switch is 100 nanometers or thinner, to allow electrons to be transmitted through it. "It would be interesting to see if the multi-stage, transient switching phenomenon we observe in our vanadium-dioxide-based devices is found more broadly across the solid-state device landscape," said Sood. "We are thrilled by the prospect of looking at some of the emerging memory and logic technologies, where for the first time, we can visualize ultrafast atomic motions occurring during switching."
Aaron Lindenberg, a professor in the Department of Materials Science and Engineering at Stanford and a collaborator with Sood on this work, said, "More generally, this work also opens up new possibilities for using electric fields to synthesize and stabilize new materials with potentially useful functional properties." The group's research was published in a recent issue of the journal Science.
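The measurement described above is essentially a loop over pump-probe delays: apply an electrical bias pulse, fire an electron probe pulse a controlled delay later, record the diffraction pattern, and repeat while stepping the delay. The Python sketch below illustrates only that control flow; the pulse_generator, electron_source, and detector objects and their methods are hypothetical placeholders, not the actual SLAC MeV-UED control software.

    # Minimal sketch of an electrically pumped, electron-probed delay scan.
    # All hardware interfaces here are hypothetical placeholders.
    import numpy as np

    def delay_scan(pulse_generator, electron_source, detector, delays_ns, shots_per_delay=100):
        """Step the pump-probe delay and record an averaged diffraction pattern at each delay."""
        movie = []
        for delay in delays_ns:
            frames = []
            for _ in range(shots_per_delay):
                pulse_generator.fire_bias_pulse()            # electrical "pump"
                electron_source.fire_probe(delay_ns=delay)   # delayed electron "probe"
                frames.append(detector.read_pattern())       # 2D diffraction image
            movie.append(np.mean(frames, axis=0))            # average to reduce shot noise
        return np.stack(movie)                               # shape: (n_delays, height, width)

    # Example (with real or simulated hardware objects):
    # delays = np.linspace(-5, 50, 56)   # ns, from before the pump pulse to well after it
    # patterns = delay_scan(pump, gun, camera, delays)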
  • Learn How the President-Elect Candidates Plan to Improve IEEE
    Jul 30, 2021 09:14 AM PDT
    The four candidates share their plans for increasing student membership; expanding science, technology, engineering, and math education programs; and attracting more members from industry. The annual IEEE election process begins in August, so be sure to check your mailbox for your ballot. To help you choose the 2022 IEEE president-elect, The Institute is publishing the official biographies and position statements of the four candidates, as approved by the IEEE Board of Directors. The candidates are Life Fellow Thomas M. Coughlin, Life Senior Member Francis Grosz, Life Fellow Saifur Rahman, and Fellow S.K. Ramesh. Thomas M. Coughlin Life Fellow Thomas M. Coughlin Nominated by Petition Coughlin is founder and president of Coughlin Associates in San Jose, Calif., which provides market and technology analysis as well as data storage, memory technology, and business consulting services. This IEEE Life Fellow has more than 40 years of experience in the data storage industry and has been a consultant for 20 years. Before starting his own company, Coughlin held leadership positions in companies such as Micropolis, Syquest Technology, and Ampex. Coughlin publishes several industry reports including the Digital Storage Technology Newsletter, the Media and Entertainment Storage Report, and the Emerging Non-Volatile Memory Report. He is also the author of Digital Storage in Consumer Electronics: The Essential Guide, which is now in its second edition. Coughlin is a regular contributor on digital storage for the Forbes blog and other news outlets. He has held several leadership positions in IEEE including 2019 IEEE-USA president as well as chair of the IEEE New Initiatives Committee, the IEEE Public Visibility Committee, and IEEE Region 6. He is also an active member of the IEEE Santa Clara Valley Section and has been active with several societies, standards, and the Future Directions Committee. Coughlin, who is an IEEE-HKN member, is the recipient of the 2020 IEEE Member and Geographic Activities Leadership Award. He is on the Society of Motion Picture and Television Engineers' Conference Program Committee and has published articles in several of the organization's journals. Coughlin is also the founder of the Storage Networking Industry Association's Solid State Storage Initiative, the Storage Visions Conference, and the Creative Storage Conference. The events highlight the latest trends in digital storage and their applications. He was the general chair of the annual Flash Memory Summit Conference and Exhibition for 10 years. The event brings together storage industry engineers to network, learn about upcoming technologies, and meet with various organizations in the industry. CANDIDATE'S POSITION STATEMENT The COVID-19 pandemic impacted our members and IEEE operations. The lessons learned can help us improve IEEE's reach, relevance, and value. I believe that IEEE is a community that must engage our members. This should start at the section level, but the idea of community should pervade IEEE, helping us be inclusive, efficient, and effective. I feel strongly that IEEE should advance and promote its members, increase student membership and Young Professional retention, make senior member advancement easier, and find more ways to recognize our heroes. As IEEE president, I will work to increase our value to industry, encourage STEM careers and sustainable technologies, support lifelong learning, improve diversity, and promote IEEE membership. 
I will encourage collaboration and innovation across the organization, while working to increase our external impact and general public awareness, and advancing technology for the benefit of humanity. I support a member-led organization, open discussion and transparency within the IEEE, and believe that we must be a representative global organization. I believe that IEEE should be open to all technologists and that we should create a safe environment that supports our members to be their best selves. My leadership experience inside and outside of IEEE and my connections with multiple IEEE organizational units enables me to facilitate linking, partnering, and communicating across boundaries. I feel that IEEE should be at the forefront of advancing new technologies, creating timely standards, and influencing public policies that demonstrate the value of technology professionals to the world. Francis Grosz Life Senior Member Francis Grosz Nominated by the IEEE Board of Directors Grosz, who retired in 2012, designed systems for defense contractors Litton Data Systems, Omni Technologies, and the U.S. Naval Research Laboratory. He was granted two U.S. patents—one for a method of transmitting data through a ship's bulkhead, and the second for a NASA fiber-optic communication system for rocket engine testing. He was an assistant professor of engineering at the University of New Orleans for six years and an adjunct professor for two years. Grosz was also an adjunct engineering professor at Tulane University, also in New Orleans, for two years. Grosz has been an IEEE volunteer for more than 35 years, serving at the section, region, and institute levels. He has held almost all offices at the section level, including chair, secretary, and vice chair of the IEEE New Orleans Section. Grosz also has been a member of the IEEE Region 5 executive committee for 18 years. He served on the IEEE Board of Directors as the 2016–2017 Region 5 director and the 2019 vice president of IEEE Member and Geographic Activities (MGA). He was the 2017 chair of the audit committee and cochair of the 2019 ad hoc committee on member engagement, which included three subcommittees that examined member value and led MGA's efforts in realigning IEEE's regions. Grosz, a member of IEEE-HKN, has received several recognitions including an IEEE Third Millennium Medal, the 2008 IEEE Region 5 Outstanding Member Award, and a 1999 NASA Space Act Award, which recognizes a technical innovation of significant value to the agency's activities. An amateur radio operator, his call sign is K5FBG. CANDIDATE'S POSITION STATEMENT IEEE does many things well, and we must continue to support them. Should I become a Board member and president, I would focus on increasing support for local Organizational Units (OUs)—sections, chapters, affinity groups, and student branches—and on increasing industry engagement. I think there is a natural synergy here, and our greatest opportunity. I believe that the current Local Groups pilot program offers an opportunity for sections and chapters to simultaneously provide increased service and value to members, prospective members, and the public while further engaging local industry, especially smaller and mid-sized companies. The Technical Activities/MGA Ad Hoc Committee on Chapter Support is looking at ways to increase support for chapters, which is where many of our members find their value in IEEE, and we should support this. 
We also need to provide better tools to help local OUs provide more service and value to their members with less work. One particular need, especially for the smaller OUs, is a way to help them more efficiently arrange interesting meetings with engaging speakers. We have been working on industry engagement for some time. We have made progress with programs such as IEEE Collabratec and the IEEE Mobile App, and the Local Groups program should really help, but we need to do more. We must convince industry to value their engineers and their work more highly, and show them that partnering with IEEE can help both the companies and their employees. Finally, we must make the public more aware of the contributions of engineering to society. Saifur Rahman Life Fellow Saifur Rahman Nominated by Petition Rahman is a professor of electrical and computer engineering at Virginia Tech. He is the founding director of the Advanced Research Institute at the university, which provides faculty members access to research funding, government laboratories, and industry research centers. Rahman is also the founder and chairman of BEM Controls in McLean, Va., a software company that provides buildings with energy efficiency solutions. He served as chair of the U.S. National Science Foundation Advisory Committee for International Science and Engineering from 2010 to 2013. Rahman is the founding editor in chief of the IEEE Electrification Magazine and the IEEE Transactions on Sustainable Energy. He served as the 2018–2019 president of the IEEE Power & Energy Society (IEEE PES). While president, Rahman established the IEEE PES Corporate Engagement Program, which allows employers to receive IEEE benefits by paying their employees' IEEE membership dues. Rahman set up IEEE PES Chapters' Councils in Africa, China, India, and Latin America. These councils have empowered local leaders to initiate local programs. He also led the effort to establish the PES University, which offers courses, tutorials, and webinars to members. Rahman was also the 2006 chair of IEEE Publication Services and Products Board and a member of the IEEE Board of Directors. He is a Distinguished Lecturer for IEEE PES, and has given lectures in more than 30 countries on topics such as the smart grid, energy-efficient buildings, and sensor integration. Rahman has received several IEEE awards including the 2000 IEEE Millennium Medal for outstanding achievements and contributions to IEEE, the 2011 IEEE-USA Professional Achievement Award, the 2012 IEEE PES Meritorious Service Award, and the 2013 IEEE PES Outstanding Power Engineering Educator Award. CANDIDATE'S POSITION STATEMENT Over the past 40 plus years, IEEE has been an integral part of my pursuit of excellence in professional life. While speaking at more than 200 IEEE events, I have come across academics, young professionals, and mid-career engineers in industry and government including women and under-represented minorities. Such engagements at the grassroots level gives me better insights into understanding the community's needs and help advance their professional careers. I pledge to: Global Community Building: Highlight networking opportunities at IEEE events. Reach out proactively to women and underrepresented minorities. Ensure financial transparency, stability, and broader member benefits. Encourage and Recognize Member Engagement: Encourage technology professionals without a college degree to join IEEE. Recognize contributions our volunteers make through committee work and reviews. 
Provide resources to help members elevate to IEEE Senior Members and IEEE Fellows. Growth and Nurturing: Provide access to IEEE resources for up-skilling worldwide. Evolve PES University best practices as IEEE University Online. Focus on Industry Certification courses. Service to Humanity and Smart Engineering: Develop closer and purposeful ties with Industry. Engage Sustainable Development thought leaders to address global challenges. Work with policymakers to help with Smart Engineering. Intellectual Property Rights: Build IPR awareness. Build an IEEE IPR skills framework. Develop KPIs for IEEE sections on local IPR activities. Partnership to Support Entrepreneurship: Work with Startup Incubators to harness Entrepreneurial potentials. Incubate Innovation Centers to nurture Maker competencies. Design IEEE Startup Show to highlight regional and local Innovations. Together we will make IEEE a more successful and resilient global technical organization. S.K. Ramesh Fellow S.K. Ramesh Nominated by the IEEE Board of Directors Ramesh is a professor of electrical and computer engineering at California State University Northridge's college of engineering and computer science, where he served as dean from 2006 to 2017. While dean, he established centers on renewable energy, entrepreneurship, and advanced manufacturing. He created an interdisciplinary master's degree program in assistive technology engineering to meet emerging workforce needs. Ramesh is the founding director of the university's nationally recognized Attract, Inspire, Mentor, and Support Students program, which advances the graduation of underrepresented minorities in engineering and computer science. He has been an IEEE volunteer for almost 40 years and has served on the IEEE Board of Directors, Awards Board, Educational Activities Board, Publication Services and Products Board, and Fellows Committee. As the 2016–2017 vice president of IEEE Educational Activities, he championed several successful programs including the IEEE Learning Network and the IEEE TryEngineering Summer Institute. He expanded chapters of IEEE's honor society, Eta Kappa Nu (IEEE-HKN), globally to serve all 10 regions, and he increased industry support as the society's 2016 president. Ramesh was elevated to IEEE Fellow in 2015 for "contributions to entrepreneurship in engineering education." He serves on the board of ABET, the global accrediting organization for academic programs in applied science, computing, engineering, and technology, and was elected as its 2021 president-elect. Ramesh has served IEEE Region 6 at the section, chapter, and area levels. He currently serves on the IEEE Buenaventura (California) Section member development team, which received a 2020 Gold Award for its work. His many recognitions include the 2004 IEEE Region 6 Community Service Award and the 2012 John J. Guarrera Engineering Educator of the Year Award from the Engineers' Council. CANDIDATE'S POSITION STATEMENT IEEE has been an integral part of my life for almost four decades—from student member to an engaged volunteer today. My IEEE experiences have taught me some timeless values: To be Inclusive, Collaborative, Accountable, Resilient, and Ethical. Simply put, "I CARE." These values and IEEE's mission are especially relevant today, as we adapt and change to serve our members globally, and overcome the challenges from the pandemic. My top priority is to deliver an exceptional experience to every member.
If elected, I will focus on three strategic areas: Member Engagement: Expand and offer affordable and accessible continuing education programs through the IEEE Learning Network (ILN), and the IEEE Academy. Strengthen participation of Women in Engineering (WIE), Young Professionals (YPs), Students, Life Members, and Entrepreneurs. Volunteer Engagement: Nurture and support IEEE's volunteer leaders to transform IEEE globally through a volunteer academy program that strengthens collaboration and inclusion. Establish strong relationships between IEEE volunteers and key industry sector leaders to increase awareness of IEEE. Industry Engagement: Increase the value of conferences, publications, and standards to make them more relevant to practicing engineers. Focus on innovation and sustainable development as we look ahead to hybrid/virtual conferences and open access publications. I will empower the IEEE leadership team to lower costs and increase the value of membership. Our members create enormous value for IEEE through their contributions. Let us "Engineer the Future," and create an IEEE "Of the Members," "By the Members," and "For the Members." Thank you for your vote and support. IEEE membership offers a wide range of benefits and opportunities for those who share a common interest in technology. If you are not already a member, consider joining IEEE and becoming part of a worldwide network of more than 400,000 students and professionals.
  • Fast, Efficient Neural Networks Copy Dragonfly Brains
    Jul 30, 2021 08:00 AM PDT
    In each of our brains, 86 billion neurons work in parallel, processing inputs from senses and memories to produce the many feats of human cognition. The brains of other creatures are less broadly capable, but those animals often exhibit innate aptitudes for particular tasks, abilities honed by millions of years of evolution. Most of us have seen animals doing clever things. Perhaps your house pet is an escape artist. Maybe you live near the migration path of birds or butterflies and celebrate their annual return. Or perhaps you have marveled at the seeming single-mindedness with which ants invade your pantry.
    Looking to such specialized nervous systems as a model for artificial intelligence may prove just as valuable, if not more so, than studying the human brain. Consider the brains of those ants in your pantry. Each has some 250,000 neurons. Larger insects have closer to 1 million. In my research at Sandia National Laboratories in Albuquerque, I study the brains of one of these larger insects, the dragonfly. I and my colleagues at Sandia, a national-security laboratory, hope to take advantage of these insects' specializations to design computing systems optimized for tasks like intercepting an incoming missile or following an odor plume. By harnessing the speed, simplicity, and efficiency of the dragonfly nervous system, we aim to design computers that perform these functions faster and at a fraction of the power that conventional systems consume.
    Looking to a dragonfly as a harbinger of future computer systems may seem counterintuitive. The developments in artificial intelligence and machine learning that make news are typically algorithms that mimic human intelligence or even surpass people's abilities. Neural networks can already perform as well—if not better—than people at some specific tasks, such as detecting cancer in medical scans. And the potential of these neural networks stretches far beyond visual processing. The computer program AlphaZero, trained by self-play, is the best Go player in the world. Its sibling AI, AlphaStar, ranks among the best Starcraft II players.
    Such feats, however, come at a cost. Developing these sophisticated systems requires massive amounts of processing power, generally available only to select institutions with the fastest supercomputers and the resources to support them. And the energy cost is off-putting. Recent estimates suggest that the carbon emissions resulting from developing and training a natural-language processing algorithm are greater than those produced by four cars over their lifetimes.
    It takes the dragonfly only about 50 milliseconds to begin to respond to a prey's maneuver. If we assume 10 ms for cells in the eye to detect and transmit information about the prey, and another 5 ms for muscles to start producing force, this leaves only 35 ms for the neural circuitry to make its calculations. Given that it typically takes a single neuron at least 10 ms to integrate inputs, the underlying neural network can be at most three layers deep.
    But does an artificial neural network really need to be large and complex to be useful? I believe it doesn't. To reap the benefits of neural-inspired computers in the near term, we must strike a balance between simplicity and sophistication. Which brings me back to the dragonfly, an animal with a brain that may provide precisely the right balance for certain applications.
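The timing budget described in the passage above is worth making explicit, since it is what limits the network to so few layers. A minimal arithmetic sketch in Python, using only the numbers quoted (50 ms reaction time, roughly 10 ms for the eye, 5 ms for the muscles, and at least 10 ms per layer of neurons):

    # Rough timing budget for the dragonfly's interception circuit, from the figures above.
    reaction_ms = 50       # time from prey maneuver to the start of the dragonfly's turn
    eye_ms = 10            # detection and transmission by the eye
    muscle_ms = 5          # time for muscles to begin producing force
    per_layer_ms = 10      # minimum integration time for a single neuron

    compute_budget_ms = reaction_ms - eye_ms - muscle_ms   # 35 ms left for neural processing
    max_layers = compute_budget_ms // per_layer_ms         # 3 sequential layers fit

    print(f"Compute budget: {compute_budget_ms} ms -> at most {max_layers} layers in sequence")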
If you have ever encountered a dragonfly, you already know how fast these beautiful creatures can zoom, and you've seen their incredible agility in the air. Maybe less obvious from casual observation is their excellent hunting ability: Dragonflies successfully capture up to 95 percent of the prey they pursue, eating hundreds of mosquitoes in a day. The physical prowess of the dragonfly has certainly not gone unnoticed. For decades, U.S. agencies have experimented with using dragonfly-inspired designs for surveillance drones. Now it is time to turn our attention to the brain that controls this tiny hunting machine.
While dragonflies may not be able to play strategic games like Go, a dragonfly does demonstrate a form of strategy in the way it aims ahead of its prey's current location to intercept its dinner. This takes calculations performed extremely fast—it typically takes a dragonfly just 50 milliseconds to start turning in response to a prey's maneuver. It does this while tracking the angle between its head and its body, so that it knows which wings to flap faster to turn ahead of the prey. And it also tracks its own movements, because as the dragonfly turns, the prey will also appear to move.
[Figure: The model dragonfly reorients in response to the prey's turning. The smaller black circle is the dragonfly's head, held at its initial position. The solid black line indicates the direction of the dragonfly's flight; the dotted blue lines are the plane of the model dragonfly's eye. The red star is the prey's position relative to the dragonfly, with the dotted red line indicating the dragonfly's line of sight.]
So the dragonfly's brain is performing a remarkable feat, given that the time needed for a single neuron to add up all its inputs—called its membrane time constant—exceeds 10 milliseconds. If you factor in time for the eye to process visual information and for the muscles to produce the force needed to move, there's really only time for three, maybe four, layers of neurons, in sequence, to add up their inputs and pass on information.
Could I build a neural network that works like the dragonfly interception system? I also wondered about uses for such a neural-inspired interception system. Being at Sandia, I immediately considered defense applications, such as missile defense, imagining missiles of the future with onboard systems designed to rapidly calculate interception trajectories without affecting a missile's weight or power consumption. But there are civilian applications as well. For example, the algorithms that control self-driving cars might be made more efficient, no longer requiring a trunkful of computing equipment. If a dragonfly-inspired system can perform the calculations to plot an interception trajectory, perhaps autonomous drones could use it to avoid collisions. And if a computer could be made the same size as a dragonfly brain (about 6 cubic millimeters), perhaps insect repellent and mosquito netting will one day become a thing of the past, replaced by tiny insect-zapping drones!
To begin to answer these questions, I created a simple neural network to stand in for the dragonfly's nervous system and used it to calculate the turns that a dragonfly makes to capture prey. My three-layer neural network exists as a software simulation. Initially, I worked in Matlab simply because that was the coding environment I was already using. I have since ported the model to Python.
Because dragonflies have to see their prey to capture it, I started by simulating a simplified version of the dragonfly's eyes, capturing the minimum detail required for tracking prey. Although dragonflies have two eyes, it's generally accepted that they do not use stereoscopic depth perception to estimate distance to their prey. In my model, I did not include both eyes. Nor did I try to match the resolution of a dragonfly eye. Instead, the first layer of the neural network includes 441 neurons that represent input from the eyes, each describing a specific region of the visual field—these regions are tiled to form a 21-by-21-neuron array that covers the dragonfly's field of view. As the dragonfly turns, the location of the prey's image in the dragonfly's field of view changes. The dragonfly calculates turns required to align the prey's image with one (or a few, if the prey is large enough) of these "eye" neurons. A second set of 441 neurons, also in the first layer of the network, tells the dragonfly which eye neurons should be aligned with the prey's image, that is, where the prey should be within its field of view.
[Figure: The model dragonfly engages its prey.]
Processing—the calculations that take input describing the movement of an object across the field of vision and turn it into instructions about which direction the dragonfly needs to turn—happens between the first and third layers of my artificial neural network. In this second layer, I used an array of 194,481 (21⁴) neurons, likely much larger than the number of neurons used by a dragonfly for this task. I precalculated the weights of the connections between all the neurons in the network. While these weights could be learned with enough time, there is an advantage to "learning" through evolution and preprogrammed neural network architectures. Once it comes out of its nymph stage as a winged adult (technically referred to as a teneral), the dragonfly does not have a parent to feed it or show it how to hunt. The dragonfly is in a vulnerable state and getting used to a new body—it would be disadvantageous to have to figure out a hunting strategy at the same time.
I set the weights of the network to allow the model dragonfly to calculate the correct turns to intercept its prey from incoming visual information. What turns are those? Well, if a dragonfly wants to catch a mosquito that's crossing its path, it can't just aim at the mosquito. To borrow from what hockey player Wayne Gretzky once said about pucks, the dragonfly has to aim for where the mosquito is going to be. You might think that following Gretzky's advice would require a complex algorithm, but in fact the strategy is quite simple: All the dragonfly needs to do is to maintain a constant angle between its line of sight with its lunch and a fixed reference direction. Readers who have any experience piloting boats will understand why that is. They know to get worried when the angle between the line of sight to another boat and a reference direction (for example, due north) remains constant, because they are on a collision course. Mariners have long avoided steering such a course, known as parallel navigation, in order to avoid collisions.
[Figure: These three heat maps show the activity patterns of neurons at the same moment; the first set represents the eye, the second represents those neurons that specify which eye neurons to align with the prey's image, and the third represents those that output motor commands.]
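The layer sizes given above (441 "eye" neurons on a 21-by-21 grid plus 441 target-location neurons in the first layer, 21⁴ = 194,481 neurons in the middle layer, and a small motor-command layer) map directly onto a plain feed-forward structure. The NumPy sketch below mirrors only those dimensions; the weights are random placeholders rather than the precalculated interception weights described in the article, the motor layer size is an assumption, and the middle layer is shrunk by default so the sketch runs instantly.

    # Skeleton of a three-layer, dragonfly-style network; sizes follow the article, weights do not.
    import numpy as np

    EYE = 21 * 21         # 441 neurons reporting where the prey's image currently is
    TARGET = 21 * 21      # 441 neurons encoding where the image *should* be
    HIDDEN = 21 ** 2      # the article uses 21**4 = 194,481; kept small here for a quick run
    MOTOR = 4             # a handful of high-level turn commands (size is an assumption)

    rng = np.random.default_rng(0)
    W1 = 0.01 * rng.standard_normal((HIDDEN, EYE + TARGET))   # random stand-ins, NOT the
    W2 = 0.01 * rng.standard_normal((MOTOR, HIDDEN))          # precalculated interception weights

    def forward(eye_activity, target_activity):
        """One pass from visual input to motor command."""
        x = np.concatenate([eye_activity, target_activity])   # first layer: 882 values
        h = np.maximum(W1 @ x, 0.0)                           # middle (processing) layer
        return W2 @ h                                         # motor-command layer

    # Example: prey image at grid cell (10, 12); desired location at the eye's center (10, 10).
    eye = np.zeros(EYE)
    eye[10 * 21 + 12] = 1.0
    tgt = np.zeros(TARGET)
    tgt[10 * 21 + 10] = 1.0
    print(forward(eye, tgt))   # four (meaningless, random-weight) motor activations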
Translated to dragonflies, which want to collide with their prey, the prescription is simple: keep the line of sight to your prey constant relative to some external reference. However, this task is not necessarily trivial for a dragonfly as it swoops and turns, collecting its meals. The dragonfly does not have an internal gyroscope (that we know of) that will maintain a constant orientation and provide a reference regardless of how the dragonfly turns. Nor does it have a magnetic compass that will always point north. In my simplified simulation of dragonfly hunting, the dragonfly turns to align the prey's image with a specific location on its eye, but it needs to calculate what that location should be. The third and final layer of my simulated neural network is the motor-command layer. The outputs of the neurons in this layer are high-level instructions for the dragonfly's muscles, telling the dragonfly in which direction to turn. The dragonfly also uses the output of this layer to predict the effect of its own maneuvers on the location of the prey's image in its field of view and updates that projected location accordingly. This updating allows the dragonfly to hold the line of sight to its prey steady, relative to the external world, as it approaches. It is possible that biological dragonflies have evolved additional tools to help with the calculations needed for this prediction. For example, dragonflies have specialized sensors that measure body rotations during flight as well as head rotations relative to the body—if these sensors are fast enough, the dragonfly could calculate the effect of its movements on the prey's image directly from the sensor outputs or use one method to cross-check the other. I did not consider this possibility in my simulation. To test this three-layer neural network, I simulated a dragonfly and its prey, moving at the same speed through three-dimensional space. As they do so my modeled neural-network brain "sees" the prey, calculates where to point to keep the image of the prey at a constant angle, and sends the appropriate instructions to the muscles. I was able to show that this simple model of a dragonfly's brain can indeed successfully intercept other bugs, even prey traveling along curved or semi-random trajectories. The simulated dragonfly does not quite achieve the success rate of the biological dragonfly, but it also does not have all the advantages (for example, impressive flying speed) for which dragonflies are known. More work is needed to determine whether this neural network is really incorporating all the secrets of the dragonfly's brain. Researchers at the Howard Hughes Medical Institute's Janelia Research Campus, in Virginia, have developed tiny backpacks for dragonflies that can measure electrical signals from a dragonfly's nervous system while it is in flight and transmit these data for analysis. The backpacks are small enough not to distract the dragonfly from the hunt. Similarly, neuroscientists can also record signals from individual neurons in the dragonfly's brain while the insect is held motionless but made to think it's moving by presenting it with the appropriate visual cues, creating a dragonfly-scale virtual reality. Data from these systems allows neuroscientists to validate dragonfly-brain models by comparing their activity with activity patterns of biological neurons in an active dragonfly. 
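Independently of the neural network, the constant-bearing rule itself (keep the line of sight to the prey pointing in a fixed direction while closing the range) can be demonstrated in a few lines of simulation. The 2D sketch below uses invented speeds and positions purely to illustrate the geometry; it is not the author's model.

    # Toy 2D demonstration of constant-bearing ("parallel navigation") interception.
    # Positions, speeds, and time step are invented for illustration.
    import numpy as np

    dt = 0.01                                                        # s
    pursuer = np.array([0.0, 0.0]); speed = 3.0                      # pursuer position (m) and speed (m/s)
    prey = np.array([5.0, 5.0]); prey_vel = np.array([-1.0, 0.3])    # prey position and velocity

    for step in range(10_000):
        los = prey - pursuer                                         # line-of-sight vector
        dist = np.linalg.norm(los)
        if dist < 0.05:
            print(f"Intercepted at t = {step * dt:.2f} s")
            break
        u = los / dist                                               # unit vector along the line of sight
        n = np.array([-u[1], u[0]])                                  # unit vector perpendicular to it
        # Constant-bearing rule: match the prey's sideways speed exactly so the line of
        # sight never rotates, and spend the remaining speed closing the distance.
        side = float(prey_vel @ n)
        along = np.sqrt(max(speed**2 - side**2, 0.0))
        pursuer = pursuer + (along * u + side * n) * dt
        prey = prey + prey_vel * dt
    else:
        print("No interception within the simulated time")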
While we cannot yet directly measure individual connections between neurons in the dragonfly brain, I and my collaborators will be able to infer whether the dragonfly's nervous system is making calculations similar to those predicted by my artificial neural network. That will help determine whether connections in the dragonfly brain resemble my precalculated weights in the neural network. We will inevitably find ways in which our model differs from the actual dragonfly brain. Perhaps these differences will provide clues to the shortcuts that the dragonfly brain takes to speed up its calculations.
[Figure: This backpack, which captures signals from electrodes inserted in a dragonfly's brain, was created by Anthony Leonardo, a group leader at Janelia Research Campus. Image: Anthony Leonardo/Janelia Research Campus/HHMI]
Dragonflies could also teach us how to implement "attention" on a computer. You likely know what it feels like when your brain is at full attention, completely in the zone, focused on one task to the point that other distractions seem to fade away. A dragonfly can likewise focus its attention. Its nervous system turns up the volume on responses to particular, presumably selected, targets, even when other potential prey are visible in the same field of view. It makes sense that once a dragonfly has decided to pursue a particular prey, it should change targets only if it has failed to capture its first choice. (In other words, using parallel navigation to catch a meal is not useful if you are easily distracted.) Even if we end up discovering that the dragonfly mechanisms for directing attention are less sophisticated than those people use to focus in the middle of a crowded coffee shop, it's possible that a simpler but lower-power mechanism will prove advantageous for next-generation algorithms and computer systems by offering efficient ways to discard irrelevant inputs.
The advantages of studying the dragonfly brain do not end with new algorithms; they also can affect systems design. Dragonfly eyes are fast, operating at the equivalent of 200 frames per second: That's several times the speed of human vision. But their spatial resolution is relatively poor, perhaps just a hundredth of that of the human eye. Understanding how the dragonfly hunts so effectively, despite its limited sensing abilities, can suggest ways of designing more efficient systems. Returning to the missile-defense problem, the dragonfly example suggests that antimissile systems with fast optical sensing could require less spatial resolution to hit a target.
The dragonfly isn't the only insect that could inform neural-inspired computer design today. Monarch butterflies migrate incredibly long distances, using some innate instinct to begin their journeys at the appropriate time of year and to head in the right direction. We know that monarchs rely on the position of the sun, but navigating by the sun requires keeping track of the time of day. If you are a butterfly heading south, you would want the sun on your left in the morning but on your right in the afternoon. To set its course, the butterfly brain must therefore read its own circadian rhythm and combine that information with what it is observing.
Other insects, like the Sahara desert ant, must forage for relatively long distances. Once a source of sustenance is found, this ant does not simply retrace its steps back to the nest, likely a circuitous path. Instead it calculates a direct route back.
Because the location of an ant's food source changes from day to day, it must be able to remember the path it took on its foraging journey, combining visual information with some internal measure of distance traveled, and then calculate its return route from those memories. While nobody knows what neural circuits in the desert ant perform this task, researchers at the Janelia Research Campus have identified neural circuits that allow the fruit fly to self-orient using visual landmarks. The desert ant and monarch butterfly likely use similar mechanisms. Such neural circuits might one day prove useful in, say, low-power drones. And what if the efficiency of insect-inspired computation is such that millions of instances of these specialized components can be run in parallel to support more powerful data processing or machine learning? Could the next AlphaZero incorporate millions of antlike foraging architectures to refine its game playing? Perhaps insects will inspire a new generation of computers that look very different from what we have today. A small army of dragonfly-interception-like algorithms could be used to control moving pieces of an amusement park ride, ensuring that individual cars do not collide (much like pilots steering their boats) even in the midst of a complicated but thrilling dance. No one knows what the next generation of computers will look like, whether they will be part-cyborg companions or centralized resources much like Isaac Asimov's Multivac. Likewise, no one can tell what the best path to developing these platforms will entail. While researchers developed early neural networks drawing inspiration from the human brain, today's artificial neural networks often rely on decidedly unbrainlike calculations. Studying the calculations of individual neurons in biological neural circuits—currently only directly possible in nonhuman systems—may have more to teach us. Insects, apparently simple but often astonishing in what they can do, have much to contribute to the development of next-generation computers, especially as neuroscience research continues to drive toward a deeper understanding of how biological neural circuits work. So next time you see an insect doing something clever, imagine the impact on your everyday life if you could have the brilliant efficiency of a small army of tiny dragonfly, butterfly, or ant brains at your disposal. Maybe computers of the future will give new meaning to the term "hive mind," with swarms of highly specialized but extremely efficient minuscule processors, able to be reconfigured and deployed depending on the task at hand. With the advances being made in neuroscience today, this seeming fantasy may be closer to reality than you think. This article appears in the August 2021 print issue as "Lessons From a Dragonfly's Brain."
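The desert-ant behavior described above, keeping a running tally of movement and then heading straight home, is the classic path-integration computation: it reduces to summing displacement vectors. Below is a minimal sketch in Python; the outbound path is invented for illustration.

    # Path integration: accumulate outbound displacements, then return along the negated sum.
    import numpy as np

    # Invented outbound journey: (heading in radians, distance in meters) for each leg.
    outbound_legs = [(0.0, 3.0), (np.pi / 3, 2.0), (-np.pi / 4, 4.0), (np.pi / 2, 1.5)]

    home_vector = np.zeros(2)
    for heading, distance in outbound_legs:
        home_vector += distance * np.array([np.cos(heading), np.sin(heading)])

    return_heading = np.arctan2(-home_vector[1], -home_vector[0])   # point back at the nest
    return_distance = np.linalg.norm(home_vector)

    print(f"Direct route home: {return_distance:.2f} m at {np.degrees(return_heading):.1f} deg")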
  • IoT-ize Your Old Gadgets With a Mechanical Hijacking Device
    Jul 30, 2021 06:00 AM PDT
    Just about every device you own is probably available with Internet connectivity. Whether or not every device you own actually needs Internet connectivity is up for debate. But if you want it, it's there—as long as you're able to afford it, of course. Connectivity usually comes at a premium, and it also usually involves buying a brand new whatever it is, because new hardware and software and services are required. If connectivity is important to you, there aren't many options for older devices. It's not hard to turn them on and off with a connected socket adapter of some sort, but as you start to go back more than a few years, things become increasingly designed for direct human interaction rather than for the Internet, with buttons and switches and dials and whatnot.
    IoTIZER is a prototype of a mechanical hijacking device (MHD), designed to replace human manipulation of existing products. As the name suggests, it can IoT-ize just about anything designed to be operated by a human, potentially giving a new connected life to your stuff. The IoTIZER MHD is capable of handling just about anything with an interface that can be pushed, pulled, or twisted. It was inspired by a 2D plotter with the ability to rotate and extend, and it also includes an extra degree of freedom that allows it to push buttons. A small stick-on adapter enables additional motions, like rotating knobs. Special attention was paid to the software, which was designed to be as easy to use as possible, making the MHD accessible for people without much in the way of technical background. Or, that was the idea, anyway.
    As of right now, IoTIZER is a research-through-design project from KAIST in Korea. Part of the project involved putting some prototypes in the homes of potential users. According to the researchers, a number of the initial trial's 14 participants liked the fact that the MHD was so customizable in terms of operating times, sequences, and conditions—adding functionality even to devices that already had some level of connectivity. And because the MHD uses physical hardware to interact with physical interfaces, even non-technical users had success in setting it up. As one user commented, "I think it is easy, because I can do it while looking at the product. Well, I can just put it there and check."
    [Figure: Participants installed the IoTIZER on a range of products, including consumer appliances and devices as well as various items around the home, including a recliner and a door lock. Image: KAIST/Korea Polytechnic University]
    The paper doesn't directly address the cost of the MHD, and with three motors and some other electronics in it, it's not likely to be super cheap. I'd expect it to cost at least US $100, although perhaps it could get somewhat cheaper in volume. If there's one specific device that you'd like to automate in one specific way, many cheaper alternatives may be available (like connected power switches), or it could even be cheaper to buy an entirely new device. But doing so is wasteful, whereas the MHD potentially gives a smart new life to many old things. It's also possible to build your own MHD device for cheap, if you know what you're doing. But for the vast majority of people, a DIY approach isn't practical, and buying something that works out of the box and comes with a friendly app seems like it would be worth a cost premium. Although, we should reiterate that the IoTIZER is not commercially available, and may never be commercially available, so if you're really into the idea, a DIY version might be the way to go.
The researchers also share this interesting take on the future of MHDs at the end of their paper: One potential solution to these challenges is a mechanical hijacking robot rather than a device. Recent technological developments are introducing personal service robots into the home. If these robots are commercialized before the full deployment of the IoT, they could more intelligently provide the MHD experience. I'm fairly certain that personal service robots with the ability to manipulate objects the way IoTIZER can won't be in most homes for at least a decade. Although, it's interesting to think about what you might be able to do with the addition of a small camera that could recognize things like sounds or flashing buttons or basic text. Perhaps I just need to expand my idea of what a personal service robot actually is—if it's just a little box with a couple of servos and some sensors that can intelligently hijack whatever I want it to, that could be good enough for me.
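Mechanically, the rotate-and-extend design described above amounts to polar-coordinate positioning: to reach a control at planar position (x, y), the arm rotates to the target angle, extends to the target radius, and then uses its third degree of freedom to press. The Python sketch below shows only that coordinate conversion; the joint limits and the command format are invented for illustration and are not taken from the IoTIZER paper.

    # Polar-coordinate targeting for a rotate-and-extend arm with a push axis.
    # Geometry only; the reach limit and the "press" interface are illustrative assumptions.
    import math

    MAX_EXTENSION_MM = 300.0   # assumed reach of the extending arm

    def target_to_joint_commands(x_mm: float, y_mm: float, press_depth_mm: float = 5.0):
        """Convert a button position (x, y) in the device plane into joint commands."""
        radius = math.hypot(x_mm, y_mm)                      # how far to extend
        if radius > MAX_EXTENSION_MM:
            raise ValueError("target is outside the arm's reach")
        angle_deg = math.degrees(math.atan2(y_mm, x_mm))     # how far to rotate the base
        return {"rotate_deg": angle_deg, "extend_mm": radius, "press_mm": press_depth_mm}

    # Example: a button 120 mm out and 80 mm across from the arm's pivot.
    print(target_to_joint_commands(120.0, 80.0))
    # {'rotate_deg': 33.69..., 'extend_mm': 144.22..., 'press_mm': 5.0}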
  • Video Friday: Android Printing
    Jul 30, 2021 01:38 AM PDT
    Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27–October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Designing an actuated android face is complicated and time-consuming, but Hiroshi Ishiguro's lab has come up with a system to use a multi-material 3D printer to produce an android head (including skin and mechanical components) all at once. Just shove an actuator pack in the back, and you're good to go. With 31 degrees of freedom, the idea is that you'd be able to quickly iterate on designs without having to do a bunch of actuator adjustments at the same time. [ Paper ]

The OSU Dynamic Robotics Laboratory's research team, led by Agility Robotics' Jonathan Hurst, combined expertise from biomechanics and robot controls with new machine learning tools to accomplish something new: train a bipedal robot to run a full 5K on a single battery charge! In preparation for the previous video, Cassie ran a 5k on a turf field, clocking in at a (world record?) time of 43:59. [ Agility Robotics ]

GITAI's prototype lunar rover testing regimen is amazing and ridiculous—and definitely watch until the end. Please do all of this stuff on the moon. Please? [ GITAI ]

My advice: mute this video and use the following video as a soundtrack starting at about 50 seconds in. [ ODRI ]

We all know that in theory, a repetitive motion from a fixed position is not all that hard for robots. But there's a lot of air to cover between the half line and the hoop, right? [ Toyota ] Thanks, Harry!

Here's some super cool research presented at RSS, showing some highly dexterous fingertips made of tiny delta robots. [ Project Page ]

If you missed the drone show at the Olympics opening ceremony, it was sufficiently impressive, with 1824 drones in attendance. I'm pretty sure I saw at least one drone falling out of the sky, though. [ NBC ]

The question with UBTECH's Walker X is whether it's going to be another ASIMO (cool to watch but doesn't do much), or something that's, you know, useful. [ UBTECH ]

We usually think of telepresence as one operator and one robot, but the efficient thing to do is have a human who can be sometimes in the loop for a bunch of different robots. Here's a good start. [ i-BOTICS ] Thanks, Fan!

This is a little weird, but I like it! [ Azumi Maekawa ] Thanks, Fan!

When you've got too many drones on your drone. [ NIMBUS Lab ]

Good to see Relay out there Relaying. [ Savioke ]

Recently, the European Union funded the project PULSAR (Prototype of an Ultra Large Structure Assembly Robot) through the Space Robotic Technologies program within Horizon 2020. PULSAR aims to develop and demonstrate the technology that will allow on-orbit precise assembly of a segmented mirror using an autonomous robotic system. [ Pulsar ]

I will once again point out that if a kitchen robot can't do prep or cleanup, I don't consider it to be all that useful.
[ Moley ]

Pro tip: You can always tell when someone is trying to sell something to the military when it has generic hard rock as its soundtrack. [ Lockheed Martin ]

In episode seven of The Robot Brains Podcast, our guest is ABB's Marc Segura. Marc is the Managing Director of consumer segments and service robotics at ABB. In this clip Marc explains the differences between Boston Dynamics' and ABB's robots. [ Robot Brains ]

I'd suggest skipping through the vast majority of this domino robot video, but the bits where they actually talk about the mechanical stuff are interesting. [ YouTube ]

Spot, Boston Dynamics' quadrupedal robot, can reliably walk nearly anywhere a human can, but what does it do? While legged mobility is a necessary skill for many use cases, it's not the only requirement for value-producing applications. This talk will focus on the work we have done to develop Spot for remote and autonomous sensing applications: i) Encapsulating Spot's mobility in an extensible API, ii) Building an autonomous capability, iii) Making it easy to add sensing to build value-producing solutions. [ ICRA Legged Robots ]
  • Intel: Back on Top by 2025?
    Jul 29, 2021 01:41 PM PDT
    Intel has spent the last few years dealing with criticism that it had lost its lead against rivals in the race to put ever more and better performing transistors on each square millimeter of silicon. This week, Intel unveiled a roadmap that company executives say will put the company at the head of the pack by 2025.

Acknowledging that the naming convention for new semiconductor manufacturing process technologies has become meaningless, Intel will stop using "nanometer" in its new node names after the 10-nanometer node, which is in high volume production now. Intel's 10-nm technology is a good illustration of the need for a name change, because the process appears to make transistor features on par with TSMC's and Samsung's 7-nm technology.

Intel has rechristened its newest node, previously called Enhanced SuperFin, as Intel 7. That technology has entered volume production, and it offers a 10-15 percent performance-per-watt improvement. Basically, that means a 10-15 percent faster transistor at a fixed power or 10-15 percent better power efficiency at a fixed operating frequency, or something in between. Intel 7 takes advantage of an improved version of the FinFET transistor. In FinFETs, the transistor channel, through which current flows, is a vertical fin-shaped protrusion surrounded on three sides by the transistor gate. It's been the leading-edge configuration of the device since 2011. Intel introduced what it calls SuperFin with the 10-nm node.

The next stage, Intel 4, is what the company used to call 7 nanometers. It sticks with the SuperFin and gains 20 percent in performance per watt over Intel 7. The process also makes more use of extreme-ultraviolet (EUV) lithography, the most advanced chip-making technology. Intel plans to begin making its Meteor Lake processor using the process for one client in the second quarter of 2022, and it will use it for the Granite Rapids data center chip as well. (This stage of production is often called "risk" production, because initial customers are taking a risk that the new manufacturing process will work for their designs.)

Intel 3 will follow in the second half of 2023, using even more EUV to reduce logic cell area. It sticks with the FinFET structure, but through reduced resistance in the vertical connections to the transistor and a greater ability to drive current through the transistor, it manages an 18 percent performance-per-watt gain over Intel 4.

The final step the company detailed was Intel 20A, set to ramp up to volume production in 2024. The 'A' refers to the angstrom, according to Intel, an apparent nod to the idea that the node-name system has run out of nanometers. It's with Intel 20A that things get really interesting. With 20A, Intel says it will abandon the trusty FinFET for what it's calling the RibbonFET, a device others call a nanosheet transistor. With nanosheets, the transistor channel forms in a stack of sheets of semiconductor. Unlike with FinFETs, where the gate covers the channel region on three sides, the nanosheets are completely surrounded by the gate, improving the electrostatic control of the transistor, among other benefits. Samsung's 3-nanometer technology will be based on nanosheets, which that company calls multibridge-channel FETs; it's set to begin manufacturing in 2022. Like Intel, TSMC is delaying the move away from FinFETs to a later node.

Intel's new RibbonFET technology, the company's implementation of a gate-all-around transistor. Intel

Perhaps just as essential as the change in transistor architecture is what Intel is calling the PowerVia.
It is essentially what Imec and Arm have been calling back-side power delivery with buried power rails. In that scheme, all the interconnects that deal with delivering power rather than data are moved beneath the transistors. This has two effects. First, it greatly reduces resistance in the power delivery network, so there's less of a voltage drop from the power source to the transistors themselves. And second, it leaves more room above the transistors for data-carrying interconnects, potentially shrinking the size of logic cells.

The combination of RibbonFET and PowerVia "will be another watershed moment in process technology," says Ann Kelleher, senior vice president and general manager of Technology Development at Intel. The company is developing 20A's successor as well, to be called 18A. It will include refinements of the RibbonFET for better performance. It may be helped along by new lithography technology under development at ASML called high numerical aperture EUV. Intel says it will be the first chip company to deploy a High NA system.

Intel Introduces New RibbonFET and PowerVia Technologies

To help keep it on top and boost its foundry business as well, Intel is also counting on advances in its advanced packaging technologies. These will expand an already-begun move toward building systems out of multiple smaller dies, called chiplets, instead of a single large piece of silicon. This allows different circuits and logic blocks to be built using the most appropriate process technology. For example, processor tiles can be constructed using the most advanced node, while an older, more economical node is employed to build the less demanding systems. Intel

The company plans to boost the density of connections for its EMIB technology, which is used to link two silicon dies horizontally in the same package. In 2023 it will also introduce two new 3D chip-stacking technologies: Foveros Omni and Foveros Direct. Omni increases the flexibility of Foveros, the company says. Direct increases the density of interconnects in the 3D stack by an order of magnitude.
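As a quick aside on what a "performance per watt" figure buys you: as the article notes for Intel 7, the same ratio can be read as more speed at fixed power or less power at fixed frequency. The snippet below is purely illustrative arithmetic on the 10 to 15 percent range quoted above; it is not an Intel calculation.

```cpp
// Two readings of the same performance-per-watt improvement, as the article
// describes for Intel 7. Purely illustrative arithmetic, not Intel data.
#include <cstdio>

int main() {
    const double gains[] = {0.10, 0.15};  // the 10-15 percent range quoted above
    for (double g : gains) {
        double speed_at_fixed_power = 1.0 + g;          // same watts, more work
        double power_at_fixed_speed = 1.0 / (1.0 + g);  // same work rate, fewer watts
        std::printf("+%.0f%% perf/watt: %.0f%% more speed at fixed power, "
                    "or %.1f%% less power at fixed frequency\n",
                    g * 100.0, (speed_at_fixed_power - 1.0) * 100.0,
                    (1.0 - power_at_fixed_speed) * 100.0);
    }
    return 0;
}
```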
  • This New Site
    Jul 29, 2021 12:27 PM PDT
    For visitors familiar with our old site, welcome to our latest incarnation, which makes it easy to go deep into a topic like machine learning, get a tutorial on the topological materials that could disrupt the semiconductor industry (when will the chip shortage end, anyway?), or scour the latest edition of Top Programming Languages (coming in August), for an emerging language poised to upend your corner of the tech sector. If you're reading this in the physical pages of IEEE Spectrum, there's a good chance you don't visit our website much, if ever. We get it. Up to now, there hasn't been a particular reason to visit if you're satisfied dipping into the pages of Spectrum on a monthly basis. So with the generous support of the IEEE New Initiatives Committee, we've designed an experience around IEEE members both current and future. We're running on one of the Web's fastest platforms, RebelMouse, to deliver beautifully textured screens studded with stunning photography and engrossing infographics to surround our award-winning journalism and columnists like Rodney Brooks, Vaclav Smil, and Allison Marsh. For this new site design we were fortunate to work with the artists at Pentagram, who were also responsible for the recent redesign of our print edition. Once you're signed in as a member (see upper right corner), the real fun begins. First, you'll see that clicking on your name in the upper right-hand corner brings up a link to your personal profile page. Here you can choose an avatar that will appear at the top of the page along with your membership grade, status, and other details. That same avatar will appear next to comments you make on articles throughout the site. Your comments will instantly appear when you post them—no more premoderation for members---but please look at our commenting policy, which aims to foster civil discourse on some of the most daunting challenges facing our planet. Once you start commenting, you can track activity in threads you're involved in right from your profile page. See something of interest, but can't read it right away? Save that article to read later by clicking on the bookmark at the top of the post and a link to it will appear on your profile. You can manage your Spectrum newsletter subscriptions from the same page. You can also create a feed of the latest posts related to the topics you're most interested in. In the mood to check out the latest print issue of Spectrum but the mail is slow or you're a digital subscriber? Download a replica PDF directly from your profile page. Did someone refer you to an article Elon Musk wrote for us back in 2009? As a member, you have access to Spectrum and The Institute's print archives going back to 2000. And there's more for you here than just your profile page and thousands of stories: We have a new podcast (Fixing the Future, hosted by Contributing Editor Steven Cherry) and webinars and white papers that go deep on our sponsors' engineering tools. We're also starting something we call Spectrum Collections, curated bundles of our articles on specific topics that showcase our most popular Hands On DIY projects, say, or a menagerie of the world's most influential CPUs. Spectrum's best stories have at their heart people coming together to solve hard problems. 
And exactly that has been happening here at Spectrum over the last year, as an indefatigable team led by me; Erico Guizzo, Digital Product Manager; and Preeti Kulkarni, Spectrum Online & Web Application Development Manager tackled digital media's hard problems in partnership with our colleagues at Pentagram, RebelMouse, Interface Guru, and IEEE IT. Enjoy the fruits of our efforts. After you've had some time to experience this new site, we'd love to know what you think. Please log in and share your thoughts with us, including what you'd like to see in the future, in the comment section below. For fun, finish the sentence "This new site is..."
  • A Circuit to Boost Battery Life
    Jul 29, 2021 08:04 AM PDT
    YOU'VE PROBABLY PLAYED hundreds, maybe thousands, of videos on your smartphone. But have you ever thought about what happens when you press “play”? The instant you touch that little triangle, many things happen at once. In microseconds, idle compute cores on your phone's processor spring to life. As they do so, their voltages and clock frequencies shoot up to ensure that the video decompresses and displays without delay. Meanwhile, other cores, running tasks in the background, throttle down. Charge surges into the active cores' millions of transistors and slows to a trickle in the newly idled ones. This dance, called dynamic voltage and frequency scaling (DVFS), happens continually in the processor, called a system-on-chip (SoC), that runs your phone and your laptop as well as in the servers that back them. It's all done in an effort to balance computational performance with power consumption, something that's particularly challenging for smartphones. The circuits that orchestrate DVFS strive to ensure a steady clock and a rock-solid voltage level despite the surges in current, but they are also among the most backbreaking to design. That's mainly because the clock-generation and voltage-regulation circuits are analog, unlike almost everything else on your smartphone SoC. We've grown accustomed to a near-yearly introduction of new processors with substantially more computational power, thanks to advances in semiconductor manufacturing. “Porting” a digital design from an old semiconductor process to a new one is no picnic, but it's nothing compared to trying to move analog circuits to a new process. The analog components that enable DVFS, especially a circuit called a low-dropout voltage regulator (LDO), don't scale down like digital circuits do and must basically be redesigned from scratch with every new generation. If we could instead build LDOs—and perhaps other analog circuits—from digital components, they would be much less difficult to port than any other part of the processor, saving significant design cost and freeing up engineers for other problems that cutting-edge chip design has in store. What's more, the resulting digital LDOs could be much smaller than their analog counterparts and perform better in certain ways. Research groups in industry and academia have tested at least a dozen designs over the past few years, and despite some shortcomings, a commercially useful digital LDO may soon be in reach. Low-dropout voltage regulators (LDOs) allow multiple processor cores on the same input voltage rail (VIN) to operate at different voltages according to their workloads. In this case, Core 1 has the highest performance requirement. Its head switch, really a group of transistors connected in parallel, is closed, bypassing the LDO and directly connecting Core 1 to VIN, which is supplied by an external power management IC. Cores 2 through 4, however, have less demanding workloads. Their LDOs are engaged to supply the cores with voltages that will save power. The basic analog low-dropout voltage regulator [left] controls voltage through a feedback loop. It tries to make the output voltage (VDD) equal to the reference voltage by controlling the current through the power PFET. In the basic digital design [right], an independent clock triggers a comparator [triangle] that compares the reference voltage to VDD. The result tells control logic how many power PFETs to activate. A TYPICAL SYSTEM-ON-CHIP for a smartphone is a marvel of integration. 
On a single sliver of silicon it integrates multiple CPU cores, a graphics processing unit, a digital signal processor, a neural processing unit, an image signal processor, as well as a modem and other specialized blocks of logic. Naturally, boosting the clock frequency that drives these logic blocks increases the rate at which they get their work done. But to operate at a higher frequency, they also need a higher voltage. Without that, transistors can't switch on or off before the next tick of the processor clock. Of course, a higher frequency and voltage comes at the cost of power consumption. So these cores and logic units dynamically change their clock frequencies and supply voltages—often ranging from 0.95 to 0.45 volts—based on the balance of energy efficiency and performance they need to achieve for whatever workload they are assigned—shooting video, playing back a music file, conveying speech during a call, and so on.

Typically, an external power-management IC generates multiple input voltage (VIN) values for the phone's SoC. These voltages are delivered to areas of the SoC chip along wide interconnects called rails. But the number of connections between the power-management chip and the SoC is limited. So, multiple cores on the SoC must share the same VIN rail. But they don't all have to get the same voltage, thanks to the low-dropout voltage regulators. LDOs along with dedicated clock generators allow each core on a shared rail to operate at a unique supply voltage and clock frequency. The core requiring the highest supply voltage determines the shared VIN value. The power-management chip sets VIN to this value and this core bypasses the LDO altogether through transistors called head switches.

To keep power consumption to a minimum, other cores can operate at a lower supply voltage. Software determines what this voltage should be, and analog LDOs do a pretty good job of supplying it. They are compact, low cost to build, and relatively simple to integrate on a chip, as they do not require large inductors or capacitors. But these LDOs can operate only in a particular window of voltage. On the high end, the target voltage must be lower than the difference between VIN and the voltage drop across the LDO itself (the eponymous "dropout" voltage). For example, if the supply voltage that would be most efficient for the core is 0.85 V, but VIN is 0.95 V and the LDO's dropout voltage is 0.15 V, that core can't use the LDO to reach 0.85 V and must work at 0.95 V instead, wasting some power. Similarly, if VIN has already been set below a certain voltage limit, the LDO's analog components won't work properly and the circuit can't be engaged to reduce the core supply voltage further. However, if the desired voltage falls inside the LDO's window, software enables the circuit and activates a reference voltage equal to the target supply voltage.

HOW DOES THE LDO supply the right voltage? In the basic analog LDO design, it's by means of an operational amplifier, feedback, and a specialized power p-channel field effect transistor (PFET). The latter is a transistor that reduces its current with increasing voltage to its gate. The gate voltage to this power PFET is an analog signal coming from the op amp, ranging from 0 volts to VIN. The op amp continuously compares the circuit's output voltage—the core's supply voltage, or VDD—to the target reference voltage.
If the LDO's output voltage falls below the reference voltage—as it would when newly active logic suddenly demands more current—the op amp reduces the power PFET's gate voltage, increasing current and lifting VDD toward the reference voltage value. Conversely, if the output voltage rises above the reference voltage—as it would when a core's logic is less active—then the op amp increases the transistor's gate voltage to reduce current and lower VDD. A basic digital LDO, on the other hand, is made up of a voltage comparator, control logic, and a number of parallel power PFETs. (The LDO also has its own clock circuit, separate from those used by the processor core.) In the digital LDO, the gate voltages to the power PFETs are binary values instead of analog, either 0 V or VIN. With each tick of the clock, the comparator measures whether the output voltage is below or above the target voltage provided by the reference source. The comparator output guides the control logic in determining how many of the power PFETs to activate. If the LDO's output is below target, the control logic will activate more power PFETs.Their combined current props up the core's supply voltage, and that value feeds back to the comparator to keep it on target. If it overshoots, the comparator signals to the control logic to switch some of the PFETs off. NEITHER THE ANALOG nor the digital LDO is ideal, of course. The key advantage of an analog design is that it can respond rapidly to transient droops and overshoots in the supply voltage, which is especially important when those events involve steep changes. These transients occur because a core's demand for current can go up or down greatly in a matter of nanoseconds. In addition to the fast response, analog LDOs are very good at suppressing variations in VIN that might come in from the other cores on the rails. And, finally, when current demands are not changing much, it controls the output tightly without constantly overshooting and undershooting the target in a way that introduces ripples in VDD. When a core's current requirement changes suddenly it can cause the LDO's output voltage to overshoot or droop [top]. Basic digital LDO designs do not handle this well [bottom left]. However, a scheme called adaptive sampling with reduced dynamic stability [bottom right] can reduce the extent of the voltage excursion. It does this by ramping up the LDO's sample frequency when the droop gets too large, allowing the circuit to respond faster. Source: S.B. Nasir et al., IEEE International Solid-State Circuits Conference (ISSCC), February 2015, pp. 98–99. These attributes have made analog LDOs attractive not just for supplying processor cores, but for almost any circuit demanding a quiet, steady supply voltage. However, there are some critical challenges that limit the effectiveness of these designs. First analog components are much more complex than digital logic, requiring lengthy design times to implement them in advanced technology nodes. Second, they don't operate properly when VIN is low, limiting how low a VDD they can deliver to a core. And finally, the dropout voltage of analog LDOs isn't as small as designers would like. Taking those last points together, analog LDOs offer a limited voltage window at which they can operate. That means there are missed opportunities to enable LDOs for power saving—ones big enough to make a noticeable difference in a smartphone's battery life. 
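The basic digital LDO described above is, in essence, a clocked bang-bang controller: one comparison per tick, one PFET switched on or off per tick. A toy discrete-time model reproduces the behavior the article describes, steady-state ripple plus a droop when the load current steps. All component values below are invented for illustration and are not taken from the article or any real design.

```cpp
// Toy discrete-time model of the basic digital LDO described above: each tick
// of the LDO clock, a comparator checks VDD against VREF and the control logic
// switches one more (or one fewer) power PFET on. All values are invented for
// illustration; real designs model the PFETs, load, and parasitics carefully.
#include <algorithm>
#include <cstdio>

int main() {
    const double VREF  = 0.80;    // target core voltage, volts
    const double C_OUT = 100e-9;  // effective on-die decoupling capacitance, farads
    const double I_FET = 5e-3;    // current per activated power PFET, amps
    const int    N_MAX = 128;     // parallel power PFETs available
    const double T_CLK = 10e-9;   // LDO sample clock period, seconds

    double vdd = VREF;
    int n_on = 16;                // enough PFETs for the initial load
    double i_load = 80e-3;        // steady-state load current, amps
    double worst_droop = 0.0;

    for (int tick = 0; tick < 2000; ++tick) {
        if (tick == 500) i_load = 200e-3;  // the core suddenly demands more current

        // Comparator plus control logic: one step per clock tick.
        n_on += (vdd < VREF) ? 1 : -1;
        n_on = std::clamp(n_on, 0, N_MAX);

        // Charge balance on the output node over one clock period.
        vdd += (n_on * I_FET - i_load) * T_CLK / C_OUT;
        worst_droop = std::max(worst_droop, VREF - vdd);
    }
    std::printf("worst-case droop after the load step: about %.0f mV\n",
                worst_droop * 1e3);
    return 0;
}
```

Running the same loop with a shorter clock period shrinks the droop but worsens the ripple and the LDO's own switching power, which is exactly the trade-off that the adaptive-sampling scheme discussed later in the article exploits.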
Digital LDOs undo many of these weaknesses: With no complex analog components, they allow designers to tap into a wealth of tools and other resources for digital design. So scaling down the circuit for a new process technology will need much less effort. Digital LDOs will also operate over a wider voltage range. At the low-voltage end, the digital components can operate at VIN values that are off-limits to analog components. And in the higher range, the digital LDO's dropout voltage will be smaller, resulting in meaningful core-power savings. But nothing's free, and the digital LDO has some serious drawbacks. Most of these arise because the circuit measures and alters its output only at discrete times, instead of continuously. That means the circuit has a comparatively slow response to supply voltage droops and overshoots. It's also more sensitive to variations in VIN, and it tends to produce small ripples in the output voltage, both of which could degrade a core's performance. How Much Power Do LDOs Save? It might seem straightforward that low-dropout voltage regulators (LDOs) could minimize processor power consumption by allowing cores to run at a variety of power levels, but exactly how do they do that? The total power consumed by a core is simply the product of the supply voltage and the current through that core. But voltage and current each have both a static component and a dynamic one—dependent on how frequently transistors are switching. The core current's static component is made up of the current that leaks across devices even when the transistors are not switching and is dependent on supply voltage. Its dynamic component, on the other hand, is a product of capacitance, clock frequency, and supply voltage. For a core connected directly to a voltage rail supplied by the external power supply IC, lowering VIN results in a quadratic reduction in dynamic power with respect to frequency plus a static power reduction that depends on the sensitivity of leakage current to VIN. So lowering the rail voltage saves quite a lot. For cores using the LDO to deliver a supply voltage that is lower than VIN, you have to take into account the power consumed by the LDO itself. At a minimum, that's the product of the voltage across the LDO (the eponymous dropout voltage in the circuit's name) and the core current. When you factor that in, the dynamic power saving from lowering the voltage is a linear relation to supply voltage rather than the quadratic one you get without the LDO. Even so, using an LDO to scale supply voltage is worthwhile. LDOs significantly lower the SoC processor power by allowing multiple cores on a shared VIN to operate at lower voltage values. Of these, the main obstacle that has limited the use of digital LDOs so far is their slow transient response. Cores experience droops and overshoots when the current they draw abruptly changes in response to a change in its workload. The LDO response time to droop events is critical to limiting how far voltage falls and how long that condition lasts. Conventional cores add a safety margin to the supply voltage to ensure correct operation during droops. A greater expected droop means the margin must be larger, degrading the LDO's energy-efficiency benefits. So, speeding up the digital LDO's response to droops and overshoots is the primary focus of the cutting-edge research in this field. SOME RECENT ADVANCES have helped speed the circuit's response to droops and overshoots. 
One approach uses the digital LDO's clock frequency as a control knob to trade stability and power efficiency for response time. A lower frequency improves LDO stability, simply because the output will not be changing as often. It also lowers the LDO's power consumption, because the transistors that make up the LDO are switching less frequently. But this comes at the cost of a slower response to transient current demands from the processor core. You can see why that would be, if you consider that much of a transient event might occur within a single clock cycle if the frequency is too low. Conversely, a high LDO clock frequency reduces the transient response time, because the comparator is sampling the output often enough to change the LDO's output current earlier in the transient event. However, this constant sampling degrades the stability of the output and consumes more power. The gist of this approach is to introduce a clock whose frequency adapts to the situation, a scheme called adaptive sampling frequency with reduced dynamic stability. When voltage droops or overshoots exceed a certain level, the clock frequency increases to more rapidly reduce the transient effect. It then slows down to consume less power and keep the output voltage stable. This trick is achieved by adding a pair of additional comparators to sense the overshoot and droop conditions and trigger the clock. In measurements from a test chip using this technique, the VDD droop reduced from 210 to 90 millivolts—a 57 percent reduction versus a standard digital LDO design. And the time it took for voltage to settle to a steady state shrank to 1.1 microseconds from 5.8 µs, an 81 percent improvement. An alternative approach for improving the transient response time is to make the digital LDO a little bit analog. The design integrates a separate analog-assisted loop that responds instantly to load current transients. The analog-assisted loop couples the LDO's output voltage to the LDO's parallel PFETs through a capacitor, creating a feedback loop that engages only when there is a steep change in output voltage. So, when the output voltage droops, it reduces the voltage at the activated PFET gates and instantaneously increases current to the core to reduce the magnitude of the droop. Such an analog-assisted loop has been shown to reduce the droop from 300 to 106 mV, a 65 percent improvement, and overshoot from 80 to 70 mV (13 percent). An alternative way to make digital LDOs respond more quickly to voltage droops is to add an analog feedback loop to the power PFET part of the circuit [top]. When output voltage droops or overshoots, the analog loop engages to prop it up [bottom], reducing the extent of the excursion. Source: M. Huang et al., IEEE Journal of Solid-State Circuits, January 2018, pp. 20–34. Of course, both of these techniques have their drawbacks. For one, neither can really match the response time of today's analog LDOs. In addition, the adaptive sampling frequency technique requires two additional comparators and the generation and calibration of reference voltages for droop and overshoot, so the circuit knows when to engage the higher frequency. The analog-assisted loop includes some analog components, reducing the design-time benefit of an all-digital system. Developments in commercial SoC processors may help make digital LDOs more successful, even if they can't quite match analog performance. 
Today, commercial SoC processors integrate all-digital adaptive circuits designed to mitigate performance problems when droops occur. These circuits, for example, temporarily stretch the core's clock period to prevent timing errors. Such mitigation techniques could relax the transient response-time limits, allowing the use of digital LDOs and boosting processor efficiency. If that happens, we can expect more efficient smartphones and other computers, while making the process of designing them a whole lot easier.
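The sidebar above argues that lowering a core's dedicated rail cuts dynamic power quadratically with voltage, while dropping the voltage through a linear LDO saves only in proportion to voltage, because the LDO dissipates the dropout voltage times the full core current. A worked example makes the comparison concrete; every value below is invented for illustration, and leakage is held constant for simplicity.

```cpp
// Worked example of the sidebar's point: lowering a dedicated rail cuts
// dynamic power quadratically with voltage, while an LDO saves only linearly,
// because it dissipates (VIN - VDD) times the core current. All values are
// invented, and leakage current is held constant for simplicity.
#include <cstdio>

int main() {
    const double C_EFF  = 2e-9;  // effective switched capacitance, farads
    const double FREQ   = 2e9;   // core clock frequency, hertz
    const double I_LEAK = 0.05;  // leakage current, amps
    const double VIN    = 0.95;  // shared rail voltage, volts
    const double VDD    = 0.75;  // per-core target voltage, volts

    // Core supplied directly from the shared rail at VIN.
    double p_direct = C_EFF * VIN * VIN * FREQ + I_LEAK * VIN;

    // Core running at VDD behind an LDO: the rail still sources the full core
    // current, so the total drawn from the rail is (core current) * VIN.
    double i_core = C_EFF * VDD * FREQ + I_LEAK;
    double p_ldo  = i_core * VIN;

    // Hypothetical ideal case: a dedicated rail at VDD, with no LDO loss.
    double p_ideal = C_EFF * VDD * VDD * FREQ + I_LEAK * VDD;

    std::printf("direct at VIN:     %.2f W\n", p_direct);
    std::printf("via LDO at VDD:    %.2f W (%.0f%% saved)\n",
                p_ldo, 100.0 * (1.0 - p_ldo / p_direct));
    std::printf("ideal rail at VDD: %.2f W (%.0f%% saved)\n",
                p_ideal, 100.0 * (1.0 - p_ideal / p_direct));
    return 0;
}
```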
  • No Antenna Could Survive Europa’s Brutal, Radioactive Environment—Until Now
    Jul 21, 2021 01:30 PM PDT
    Europa, one of Jupiter's Galilean moons, has twice as much liquid water as Earth's oceans, if not more. An ocean estimated to be anywhere from 40 to 100 miles (60 to 150 kilometers) deep spans the entire moon, locked beneath an icy surface over a dozen kilometers thick. The only direct evidence for this ocean is the plumes of water that occasionally erupt through cracks in the ice, jetting as high as 200 km above the surface. The endless, sunless, roiling ocean of Europa might sound astoundingly bleak. Yet it's one of the most promising candidates for finding extraterrestrial life. Designing a robotic lander that can survive such harsh conditions will require rethinking all of its systems to some extent, including arguably its most important: communications. After all, even if the rest of the lander works flawlessly, if the radio or antenna breaks, the lander is lost forever. Ultimately, when NASA's Jet Propulsion Laboratory (JPL), where I am a senior antenna engineer, began to seriously consider a Europa lander mission, we realized that the antenna was the limiting factor. The antenna needs to maintain a direct-to-Earth link across more than 550 million miles (900 million km) when Earth and Jupiter are at their point of greatest separation. The antenna must be radiation-hardened enough to survive an onslaught of ionizing particles from Jupiter, and it cannot be so heavy or so large that it would imperil the lander during takeoff and landing. One colleague, when we laid out the challenge in front of us, called it impossible. We built such an antenna anyway—and although it was designed for Europa, it is a revolutionary enough design that we're already successfully implementing it in future missions for other destinations in the solar system. Currently, the only planned mission to Europa is the Clipper orbiter, a NASA mission that will study the moon's chemistry and geology and will likely launch in 2024. Clipper will also conduct reconnaissance for a potential later mission to put a lander on Europa. At this time, any such lander is conceptual. NASA has still funded a Europa lander concept, however, because there are crucial new technologies that we need to develop for any successful mission on the icy world. Europa is unlike anywhere else we've attempted to land before. The antenna team, including the author (right), examine one of the antenna's subarrays. Each golden square is a unit cell in the antenna. JPL-Caltech/NASA For context, so far the only lander to explore the outer solar system is the European Space Agency's Huygens lander. It successfully descended to Saturn's moon Titan in 2005 after being carried by the Cassini orbiter. Much of our frame of reference for designing landers—and their antennas—comes from Mars landers. Traditionally, landers (and rovers) designed for Mars missions rely on relay orbiters with high data rates to get scientific data back to Earth in a timely manner. These orbiters, such as the Mars Reconnaissance Orbiter and Mars Odyssey, have large, parabolic antennas that use large amounts of power, on the order of 100 watts, to communicate with Earth. While the Perseverance and Curiosity rovers also have direct-to-Earth antennas, they are small, use less power (about 25 W), and are not very efficient. These antennas are mostly used for transmitting the rover's status and other low-data updates. These existing direct-to-Earth antennas simply aren't up to the task of communicating all the way from Europa. 
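One way to see why Mars-class direct-to-Earth hardware falls short is the inverse-square law: Europa near its farthest is roughly 900 million km away, versus roughly 400 million km for Mars at its farthest, so the received signal is several times weaker before any other factor is considered. The comparison below is a rough illustration only; a real link budget also involves antenna gains, transmit power, noise temperature, and coding.

```cpp
// Rough comparison of the extra free-space loss a Europa lander faces versus a
// Mars lander, using only the inverse-square law. A real link budget also
// includes antenna gains, transmit power, noise temperature, and coding.
#include <cmath>
#include <cstdio>

int main() {
    const double d_mars_km   = 4.0e8;  // Mars-Earth distance near maximum, km
    const double d_europa_km = 9.0e8;  // Europa-Earth distance cited in the article, km

    double power_ratio   = std::pow(d_europa_km / d_mars_km, 2.0);
    double extra_loss_db = 10.0 * std::log10(power_ratio);

    std::printf("received power is about %.1fx weaker (%.1f dB more path loss)\n",
                power_ratio, extra_loss_db);
    return 0;
}
```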
Additionally, Europa, unlike Mars, has virtually no atmosphere, so landers can't use parachutes or air resistance to slow down. Instead, the lander will depend entirely on rockets to brake and land safely. This necessity limits how big it can be—too heavy and it will require far too much fuel to both launch and land. A modestly sized 400-kilogram lander, for example, requires a rocket and fuel that combined weigh between 10 to 15 tonnes. The lander then needs to survive six or seven years of deep space travel before finally landing and operating within the intense radiation produced by Jupiter's powerful magnetic field. We also can't assume a Europa lander would have an orbiter overhead to relay signals, because adding an orbiter could very easily make the mission too expensive. Even if Clipper is miraculously still functional by the time a lander arrives, we won't assume that will be the case, as the lander would arrive well after Clipper's official end-of-mission date. JPL engineers, including the author (bottom row on left), pose with a mock-up of a Europa lander concept. The model includes several necessary technological developments, including the antenna on top and legs that can handle uneven terrain. JPL-Caltech/NASA I've mentioned previously that the antenna will need to transmit signals up to 900 million km. As a general rule, less efficient antennas need a larger surface area to transmit farther. But as the lander won't have an orbiter overhead with a large relay antenna, and it won't be big enough itself for a large antenna, it needs a small antenna with a transmission efficiency of 80 percent or higher—much more efficient than most space-bound antennas. So, to reiterate the challenge: The antenna cannot be large, because then the lander will be too heavy. It cannot be inefficient for the same reason, because requiring more power would necessitate bulky power systems instead. And it needs to survive exposure to a brutal amount of radiation from Jupiter. This last point requires that the antenna must be mostly, if not entirely, made out of metal, because metals are more resistant to ionizing radiation. The antenna we ultimately developed depends on a key innovation: The antenna is made up of circularly polarized, aluminum-only unit cells—more on this in a moment—that can each send and receive on X-band frequencies (specifically, 7.145 to 7.19 gigahertz for the uplink and 8.4 to 8.45 GHz for the downlink). The entire antenna is an array of these unit cells, 32 on a side or 1,024 in total. The antenna is 32.5 by 32.5 inches (82.5 by 82.5 centimeters), allowing it to fit on top of a modestly sized lander, and it can achieve a downlink rate to Earth of 33 kilobits per second at 80 percent efficiency. Let's take a closer look at the unit cells I mentioned, to better understand how this antenna does what it does. Circular polarization is commonly used for space communications. You might be more familiar with linear polarization, which is often used for terrestrial wireless signals; you can imagine such a signal propagating across a distance as a 2D sine wave that's oriented, say, vertically or horizontally relative to the ground. Circular polarization instead propagates as a 3D helix. This helix pattern makes circular polarization useful for deep space communications because the helix's larger “cross section" doesn't require that the transmitter and receiver be as precisely aligned. As you can imagine, a superprecise alignment across almost 750 million km is all but impossible. 
Circular polarization has the added benefit of being less sensitive to Earth's weather when it arrives. Rain, for example, causes linearly polarized signals to attenuate more quickly than circularly polarized ones. This exploded view of an 8-by-8 subarray of the antenna shows the unit cells (top layer) that work together to create steerable signal beams, and the three layers of the power divider sandwiched between the antenna's casing. JPL-Caltech/NASA Each unit cell, as mentioned, is entirely made of aluminum. Earlier antenna arrays that similarly use smaller component cells include dielectric materials like ceramic or glass to act as insulators. Unfortunately, dielectric materials are also vulnerable to Jupiter's ionizing radiation. The radiation builds up a charge on the materials over time, and precisely because they're insulators there's nowhere for that charge to go—until it's ultimately released in a hardware-damaging electrostatic discharge. So we can't use them. As mentioned before, metals are more resilient to ionizing radiation. The problem is they're not insulators, and so an antenna constructed entirely out of metal is still at risk of an electrostatic discharge damaging its components. We worked around this problem by designing each unit cell to be fed at a single point. The “feed" is the connection between an antenna and the radio's transmitter and receiver. Typically, circularly polarized antennas require two perpendicular feeds to control the signal generation. But with a bit of careful engineering and the use of a type of automated optimization called a genetic algorithm, we developed a precisely shaped single feed that could get the job done. Meanwhile, a comparatively large metal post acts as a ground to protect each feed from electrostatic discharges. The unit cells are placed in small 8-by-8 subarrays, 16 subarrays in total. Each of these subarrays is fed with something we call a suspended air stripline, in which the transmission line is suspended between two ground planes, turning the gap in between into a dielectric insulator. We can then safely transmit power through the stripline while still protecting the line from electric discharges that would build up on a dielectric like ceramic or glass. Additionally, suspended air striplines are low loss, which is perfect for the highly efficient antenna design we wanted. Put together, the new antenna design accomplishes three things: It's highly efficient, it can handle a large amount of power, and it's not very sensitive to temperature fluctuations. Removing traditional dielectric materials in favor of air striplines and an aluminum-only design gives us high efficiency. It's also a phased array, which means it uses a cluster of smaller antennas to create steerable, tightly focused signals. The nature of such an array is that each individual cell needs to handle only a fraction of the total transmission power. So while each individual cell can handle only a few watts, each subarray can handle more than 100 watts. And finally, because the antenna is made of metal, it expands and contracts uniformly as the temperature changes. In fact, one of the reasons we picked aluminum is because the metal does not expand or contract much as temperatures change. The power divider for an 8-by-8 subarray splits the signal power into a fraction that each unit cell can tolerate without being damaged. JPL-Caltech/NASA When I originally proposed this antenna concept to the Europa lander project, I was met with skepticism. 
Space exploration is typically a very risk-averse endeavor, for good reason—the missions are expensive, and a single mistake can end one prematurely. For this reason, new technologies may be dismissed in favor of tried-and-true methods. But this situation was different because without a new antenna design, there would never be a Europa mission. The rest of my team and I were given the green light to prove the antenna could work. Designing, fabricating, and testing the antenna took only 6 months. To put that in context, the typical development cycle for a new space technology is measured in years. The results were outstanding. Our antenna achieved the 80 percent efficiency threshold on both the send and receive frequency bands, despite being smaller and lighter than other antennas. In order to prove how successful our antenna could be, we subjected it to a battery of extreme environmental tests, including a handful of tests specific to Europa's atypical environment. One test is what we call thermal cycling. For this test, we place the antenna in a room called a thermal chamber and adjust the temperature over a large range—as low as –170 ℃ and as high as 150 ℃. We put the antenna through multiple temperature cycles, measuring its transmitting capabilities before, during, and after each cycle. The antenna passed this test without any issues. Each unit cell is pure aluminum. Collectively, they create a steerable signal by canceling out one another's signals in unwanted directions and reinforcing the signal in the desired direction. JPL-Caltech/NASA The antenna also needed to demonstrate, like any piece of hardware that goes into space, resilience against vibrations. Rockets—and everything they're carrying into space—shake intensely during launch, which means we need to be sure that anything that goes up doesn't come apart on the trip. For the vibration test, we loaded the entire antenna onto a vibrating table. We used accelerometers at different locations on the antenna to determine if it was holding up or breaking apart under the vibrations. Over the course of the test, we ramped up the vibrations to the point where they approximate a launch. Thermal cycling and vibration tests are standard tests for the hardware on any spacecraft, but as I mentioned, Europa's challenging environment required a few additional nonstandard tests. We typically do some tests in anechoic chambers for antennas. You may recognize anechoic chambers as those rooms with wedge-covered surfaces to absorb any signal reflections. An anechoic chamber makes it possible for us to determine the antenna's signal propagation over extremely long distances by eliminating interference from local reflections. One way to think about it is that the anechoic chamber simulates a wide open space, so we can measure the signal's propagation and extrapolate how it will look over a longer distance. What made this particular anechoic chamber test interesting is that it was also conducted at ultralow temperatures. We couldn't make the entire chamber that cold, so we instead placed the antenna in a sealed foam box. The foam is transparent to the antenna's radio transmissions, so from the point of view of the actual test, it wasn't there. But by connecting the foam box to a heat exchange plate filled with liquid nitrogen, we could lower the temperature inside it to –170 ℃. To our delight, we found that the antenna had robust long-range signal propagation even at that frigid temperature. 
The last unusual test for this antenna was to bombard it with electrons in order to simulate Jupiter's intense radiation. We used JPL's Dynamitron electron accelerator to subject the antenna to the entire ionizing radiation dose the antenna would see during its lifetime in a shortened time frame. In other words, in the span of two days in the accelerator, the antenna was exposed to the same amount of radiation as it would be during the six- or seven-year trip to Europa, plus up to 40 days on the surface. Like the anechoic chamber testing, we also conducted this test at cryogenic temperatures that were as close to those of Europa's surface conditions as possible. The antenna had to pass signal tests at cryogenic temperatures (–170 °C) to confirm that it would work as expected on Europa's frigid surface. Because it wasn't possible to bring the temperature of the entire anechoic chamber to cryogenic levels, the antenna was sealed in a white foam box. JPL-Caltech/NASA The reason for the electron bombardment test was our concern that Jupiter's ionizing radiation would cause a dangerous electrostatic discharge at the antenna's port, where it connects to the rest of the lander's communications hardware. Theoretically, the danger of such a discharge grows as the antenna spends more time exposed to ionizing radiation. If a discharge happens, it could damage not just the antenna but also hardware deeper in the communications system and possibly elsewhere in the lander. Thankfully, we didn't measure any discharges during our test, which confirms that the antenna can survive both the trip to and work on Europa. We designed and tested this antenna for Europa, but we believe it can be used for missions elsewhere in the solar system. We're already tweaking the design for the joint JPL/ESA Mars Sample Return mission that—as the name implies—will bring Martian rocks, soil, and atmospheric samples back to Earth. The mission is currently slated to launch in 2026. We see no reason why our antenna design couldn't be used on every future Mars lander or rover as a more robust alternative—one that could also increase data rates 4 to 16 times those of current antenna designs. We also could use it on future moon missions to provide high data rates. Although there isn't an approved Europa lander mission yet, we at JPL will be ready if and when it happens. Other engineers have pursued different projects that are also necessary for such a mission. For example, some have developed a new, multilegged landing system to touch down safely on uncertain or unstable surfaces. Others have created a “belly pan" that will protect vulnerable hardware from Europa's cold. Still others have worked on an intelligent landing system, radiation-tolerant batteries, and more. But the antenna remains perhaps the most vital system, because without it there will be no way for the lander to communicate how well any of these other systems are working. Without a working antenna, the lander will never be able to tell us whether we could have living neighbors on Europa. This article appears in the August 2021 print issue as “An Antenna Made for an Icy, Radioactive Hell." During the editorial process some errors were introduced to this article and have been corrected on 27 July 2021. We originally misstated the amount of power used by Mars orbiters and the Europa antenna design, as well as the number of unit cells in each subarray. 
We also incorrectly suggested that the Europa antenna design would not require a gimbal or need to reorient itself in order to stay in contact with Earth.
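For a rough sense of scale, the textbook aperture formula G = η·4πA/λ² applied to the 82.5-by-82.5-centimeter, 80-percent-efficient X-band array described above gives a ballpark gain. This is a back-of-the-envelope estimate, not a figure published by the JPL team.

```cpp
// Ballpark gain of an 82.5 cm x 82.5 cm, 80-percent-efficient X-band aperture,
// using the standard formula G = eta * 4 * pi * A / lambda^2. This is a
// back-of-the-envelope estimate, not a number from JPL.
#include <cmath>
#include <cstdio>

int main() {
    const double kPi = 3.14159265358979323846;
    const double c = 2.998e8;           // speed of light, m/s
    const double f_downlink = 8.425e9;  // middle of the 8.4-8.45 GHz band, Hz
    const double side_m = 0.825;        // array is 82.5 cm on a side
    const double efficiency = 0.80;

    double lambda = c / f_downlink;
    double area = side_m * side_m;
    double gain = efficiency * 4.0 * kPi * area / (lambda * lambda);

    std::printf("wavelength %.1f mm, gain about %.0f (%.1f dBi)\n",
                lambda * 1e3, gain, 10.0 * std::log10(gain));
    return 0;
}
```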
  • Stratospheric Balloons Take Monitoring and Surveillance to New Heights
    Jul 21, 2021 01:00 PM PDT
    Alphabet's enthusiasm for balloons deflated earlier this year, when it announced that its high-altitude Internet company, Loon, could not become commercially viable. But while the stratosphere might not be a great place to put a cellphone tower, it could be the sweet spot for cameras, argue a host of high-tech startups.

The market for Earth-observation services from satellites is expected to top US $4 billion by 2025, as orbiting cameras, radars, and other devices monitor crops, assess infrastructure, and detect greenhouse gas emissions. Low-altitude observations from drones are a sizable market in their own right. Neither platform is perfect. Satellites can cover huge swaths of the planet but remain expensive to develop, launch, and operate. Their cameras are also hundreds of kilometers from the things they are trying to see, and often moving at tens of thousands of kilometers per hour. Drones, on the other hand, can take supersharp images, but only over a relatively small area. They also need careful human piloting to coexist with planes and helicopters.

StoryTK

Balloons in the stratosphere, 20 kilometers above Earth (and 10 km above most jets), split the difference. They are high enough not to bother other aircraft and yet low enough to observe broad areas in plenty of detail. For a fraction of the price of a satellite, an operator can launch a balloon that lasts for weeks (even months), carrying large, capable sensors. Unsurprisingly, perhaps, the U.S. military has funded stratospheric balloon tests across six Midwest states to "provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats." But the Pentagon is far from the only organization flying high.

An IEEE Spectrum analysis of applications filed with the U.S. Federal Communications Commission reveals at least six companies conducting observation experiments in the stratosphere. Some are testing the communications, navigation, and flight infrastructure required for such balloons. Others are running trials for commercial, government, and military customers. The illustration depicts experimental test permits granted by the FCC from January 2020 to June 2021, together covering much of the continental United States. Some tests were for only a matter of hours; others spanned days or more.
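The "high enough, yet low enough" argument is mostly geometry: from 20 km up, the line-of-sight horizon is a few hundred kilometers away, far wider than a small drone can see and far closer to the ground than a satellite. The snippet below is an illustrative spherical-Earth calculation; usable imaging swaths are narrower than the full horizon-to-horizon view, and the drone altitude shown is simply a typical small-drone ceiling.

```cpp
// Line-of-sight horizon distance from different altitudes, for a spherical
// Earth: d = sqrt(2*R*h + h^2). Illustrative geometry only; useful imaging
// swaths are narrower than the full horizon-to-horizon view.
#include <cmath>
#include <cstdio>

int main() {
    const double R_km = 6371.0;  // mean Earth radius
    const double altitudes_km[] = {0.12, 20.0, 500.0};  // small drone, balloon, LEO satellite

    for (double h : altitudes_km) {
        double d = std::sqrt(2.0 * R_km * h + h * h);
        std::printf("altitude %6.2f km -> horizon at roughly %4.0f km\n", h, d);
    }
    return 0;
}
```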
  • This Huge DIY Workbench Gives You a Hand
    Jul 21, 2021 12:30 PM PDT
    As an avid experimenter and builder of random contraptions—and who isn't the best at putting his tools away and normally has multiple projects in various stages of completion—I often run out of work space. So I decided to build a new workbench. One that would be better, not just because it was bigger but because it would be smarter. A bench that could automatically assist me in getting things done! In my garage I previously had two main work spaces: a 183-by-76-centimeter butcher block that also houses a small milling machine, and a custom 147-by-57-cm work space with a built-in router that pops out as needed. Though this space is generous by most standards, it seems I always needed “just a bit" more. After some consideration, I purchased a 2x4basics custom workbench kit (which provides the bench's heavy-gauge structural resin supports) and lumber to form the main structure, and then cut slabs of chipboard to form a top and a bottom surface. I decided on building a 213-by-107-cm bench. This was the largest space that I could reasonably reach across and also fit in my garage without blocking movement. The 2x4basics kit came with shelves, providing space for plastic storage boxes. At this point, I thought I was done, because surely this bench would be simply something that I built and used—a background thing that needs no more mention than a screwdriver or hammer would. As it turns out, I can't leave well enough alone. The initial tweaks were small. To enhance the bench's storage, I added magnets on which to hang various tools, and augmented my existing storage cases with 3D-printed dividers. Then I added an eyebolt for my air compressor—a fabulous tool for its roughly US $40 price—to keep it at the ready for blowing off excess material. Toward the back of the bench rests a hot-air gun and a soldering station, as well as my bag of other electrical tools. The solder squid (left) uses an EZ Fan board and a motion sensor to control a fan. The bench lights are controlled using an Arduino Nano (far right)inserted into another custom board, the Grounduino (middle), which also provides a dedicated space for the recommended large capacitor when driving addressable LED strips. James Provost Then things got more complex. I added a DIY solder squid—a block with four flexible arms that I use to hold components in place while soldering—with a concrete base and an automatic solder fume extractor. Yes, my solder squid is made out of concrete, via a 3D-printed mold—though that last refinement is perhaps optional. You could make nearly the same sort of brick using a plastic storage container. Heavy, cheap, and nonconductive, concrete is the perfect base material for such a device, and for arms you simply need to stick a few coolant lines in while the concrete cures. Two of the arms have alligator clips attached, one has a larger clamp, and the third has an old PC fan, recycled for my fume extractor. I automated the fan by hooking up a rechargeable battery, a USB charger board, and a passive infrared (PIR) motion sensor. When activated by soldering movements, the PIR sensor turns the fan on with the help of a leftover original EZ Fan transistor board. (I created the EZ Fan board to control add-on cooling fans for Raspberry Pi computers, and now sell an even slimmer version.) This means that I don't ever have to remember to turn the fan on or off: It just comes on when it senses that I'm soldering. 
I normally keep it plugged into a USB port that provides power, but there is also a battery inside for when a USB port isn't available. For light, I initially just used a linkage-based desk lamp with a powerful three-lobe LED bulb. But why stop there? Why not apply strips of LEDs to the underside of the overhead storage? I did just that, pulling out a strip of 12-volt nonaddressable LEDs and powering them with a simple wall power adapter. This gave things a constant glow, but it was only a matter of time until addressable LEDs made an appearance, which would let me illuminate different zones as desired. I mounted one PIR sensor at the end of a piece of pipe and one in the middle, and then strung a strip of WS2812B RGB addressable lights along the length. I attached this to the overhead shelves with pipe hangers, which let me adjust the lighting angle as needed to complement the static white LEDs. To control both the addressable LEDs and the nonaddressable strip, I used an Arduino Nano plugged into another utility board of my own creation, the Grounduino, and connected another PIR sensor to it, giving me three sensors along the length. The Grounduino provides screw terminals for hooking wires to the Nano and, as the name suggests, five extra ground connections (and five extra 5V connections as well). It also has built-in accommodation for the recommended capacitor that others often forget to use with WSx addressable LED lights. Three infrared sensors that detect motion are spaced along the bench sothat my work zone is always automatically illuminated. James Provost Probably the biggest challenge here was actually fishing the various wires through the length of pipe, but in the end it worked quite well. Three segments of addressable LEDs turn on based on which PIR sensor is triggered, while the 12V nonaddressable strip is powered via a FQP30N06L metal-oxide-semiconductor field-effect transistor (MOSFET) under control of the Arduino (the power required is just a little on the high side for an EZ Fan board). A push-button control lets me alter the brightness of the strips using pulse-width modulation. If I was starting from scratch, I'd use a single LED voltage, as my setup currently has two power transformers (12V and 5V). Hindsight is 20/20, though it's very possible this project isn't quite done yet. I use open-source Home Assistant software to turn on house lights over Wi-Fi, and a homemade ESP8266 contraption to link the same system to my garage door, so why not my bench lights? The Grounduino and Nano were good choices here, but with an ESP8266, I could potentially automate everything and/or control it all with my phone if needed… However, for now at least, I can finally fit my projects, and my tools, on one bench! This article appears in the August 2021 print issue as “This Huge Workbench Gives You a Hand."
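The article lists the ingredients (an Arduino Nano, three PIR sensors, a WS2812B strip split into zones, a MOSFET-dimmed 12 V strip, and a brightness button) but not the firmware. The sketch below is a minimal illustration of how that logic might look; the pin numbers, LED counts, timeout, and the choice of the Adafruit_NeoPixel library are all assumptions, not details from the build.

```cpp
// Minimal sketch of the bench-lighting logic described above: three PIR
// sensors each light one zone of a WS2812B strip, a MOSFET on a PWM pin dims
// the 12 V non-addressable strip, and a push button steps the brightness.
// Pin assignments, LED counts, and timing are assumptions for illustration.
#include <Adafruit_NeoPixel.h>

const uint8_t PIR_PINS[3] = {2, 3, 4};   // assumed PIR sensor outputs
const uint8_t LED_PIN     = 6;           // assumed WS2812B data pin
const uint8_t MOSFET_PIN  = 9;           // assumed gate drive for the 12 V strip
const uint8_t BUTTON_PIN  = 5;           // assumed brightness button (to ground)

const uint16_t LEDS_PER_ZONE = 30;       // assumed strip length per zone
const uint16_t NUM_LEDS = 3 * LEDS_PER_ZONE;
const unsigned long HOLD_MS = 30000;     // keep a zone lit 30 s after motion

Adafruit_NeoPixel strip(NUM_LEDS, LED_PIN, NEO_GRB + NEO_KHZ800);

uint8_t brightness = 128;                // stepped by the button (wraps through off)
unsigned long lastMotion[3] = {0, 0, 0};

void setup() {
  for (uint8_t i = 0; i < 3; i++) pinMode(PIR_PINS[i], INPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(MOSFET_PIN, OUTPUT);
  strip.begin();
  strip.show();                          // start with all pixels off
}

void loop() {
  unsigned long now = millis();

  // Step brightness on each button press (crude debounce via delay).
  if (digitalRead(BUTTON_PIN) == LOW) {
    brightness += 64;                    // four levels, including fully off
    delay(250);
  }

  strip.setBrightness(brightness);
  analogWrite(MOSFET_PIN, brightness);   // PWM-dim the 12 V strip through the MOSFET

  // Light each zone while its PIR has seen motion recently.
  for (uint8_t z = 0; z < 3; z++) {
    if (digitalRead(PIR_PINS[z]) == HIGH) lastMotion[z] = now;
    bool on = (now - lastMotion[z]) < HOLD_MS;
    uint32_t color = on ? strip.Color(255, 255, 255) : 0;
    for (uint16_t i = 0; i < LEDS_PER_ZONE; i++) {
      strip.setPixelColor(z * LEDS_PER_ZONE + i, color);
    }
  }
  strip.show();
}
```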
  • We Don’t Need a Jetsons Future, Just a Sustainable One
    Jul 21, 2021 12:00 PM PDT
    For decades, our vision of the future has been stuck in a 1960s-era dream of science fiction embodied by The Jetsons and space travel. But that isn't what we need right now. In fact, what if our vision of that particular technologically advanced future is all wrong? What if, instead of self-driving cars, digital assistants whispering in our ears, and virtual-reality glasses, we viewed a technologically advanced society as one where everyone had sustainable housing? Where we could manage and then reduce the amount of carbon in our atmosphere? Where everyone had access to preventative health care that was both personalized and less invasive? What we need is something called cozy futurism, a concept I first encountered while reading a blog post by software engineer Jose Luis Ricón Fernández de la Puente. In the post, he calls for a vision of technology that looks at human needs and attempts to meet those needs, not only through technologies but also through cultural shifts and policy changes. Take space travel as an example. Much of the motivation behind building new rockets or developing colonies on Mars is wrapped up in the rhetoric of our warming planet being something to escape from. In doing so, we miss opportunities to fix our home rather than flee it. But we can change our attitudes. What's more, we are changing. Climate change is a great example. Albeit slowly, entrepreneurs who helped build out the products and services during the tech boom of the past 20 years are now searching for technologies to address the crisis. Jason Jacobs, the founder of the fitness app Runkeeper, has created an entire media business called My Climate Journey to find and help recruit tech folks to address climate change. Last year, Jeff Bezos created a US $10 billion fund to make investments in organizations fighting climate change. Bill Gates wrote an entire book, How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need. Mitigating climate change is an easy way to understand the goals of cozy futurism, but I'm eager to see us all go further. What about reducing pollution in urban and poor communities? Nonprofits are already using cheap sensors to pinpoint heat islands in cities, or neighborhoods where air pollution disproportionately affects communities of color. With this information, policy changes can lighten the unfair distribution of harm. And perhaps if we see the evidence of harm in data, more people will vote to attack pollution, climate change, and other problems at their sources, rather than looking to tech to put a Band-Aid on them or mitigate the effects—or worse, adding to the problem by producing a never-ending stream of throwaway gadgets. We should instead embrace tech as a tool to help governments hold companies accountable for meeting policy goals. Cozy futurism is an opportunity to reframe the best use of technology as something actively working to help humanity—not individually, like a smartwatch monitoring your health or self-driving cars easing your commute, but in aggregate. That's not to say we should do away with VR goggles or smart gadgets, but we should think a bit more about how and why we're using them, and whether we're overprioritizing them. After all, what's better than demonstrating that the existential challenges facing us all are things we can find solutions to, not just for those who can hitch a ride off-world but for everyone? 
After all, I'd rather be cozy on Earth than stuck in a bubble on Mars. This article appears in the August 2021 print issue as “Cozy Futurism."
  • How the IBM PC Won, Then Lost, the Personal Computer Market
    Jul 21, 2021 11:30 AM PDT
    On 12 August 1981, at the Waldorf Astoria Hotel in midtown Manhattan, IBM unveiled the company's entrant into the nascent personal computer market: the IBM PC. With that, the preeminent U.S. computer maker launched another revolution in computing, though few realized it at the time. Press coverage of the announcement was lukewarm. Soon, though, the world began embracing little computers by the millions, with IBM dominating those sales. The personal computer vastly expanded the number of people and organizations that used computers. Other companies, including Apple and Tandy Corp., were already making personal computers, but no other machine carried the revered IBM name. IBM's essential contributions were to position the technology as suitable for wide use and to set a technology standard. Rivals were compelled to meet a demand that they had all grossly underestimated. As such, IBM had a greater effect on the PC's acceptance than did Apple, Compaq, Dell, and even Microsoft. Despite this initial dominance, by 1986 the IBM PC was becoming an also-ran. And in 2005, the Chinese computer maker Lenovo Group purchased IBM's PC business. What occurred between IBM's wildly successful entry into the personal computer business and its inglorious exit nearly a quarter century later? From IBM's perspective, a new and vast market quickly turned into an ugly battleground with many rivals. The company stumbled badly, its bureaucratic approach to product development no match for a fast-moving field. Over time, it became clear that the sad story of the IBM PC mirrored the decline of the company. At the outset, though, things looked rosy. How the personal computer revolution was launched IBM did not invent the desktop computer. Most historians agree that the personal computer revolution began in April 1977 at the first West Coast Computer Faire. Here, Steve Jobs introduced the Apple II, with a price tag of US $1,298 (about $5,800 today), while rival Commodore unveiled its PET. Both machines were designed for consumers, not just hobbyists or the technically skilled. In August, Tandy launched its TRS-80, which came with games. Indeed, software for these new machines was largely limited to games and a few programming tools. Apple cofounder Steve Jobs unveiled the Apple II at the West Coast Computer Faire in April 1977. Tom Munnecke/Getty Images IBM's large commercial customers faced the implications of this emerging technology: Who would maintain the equipment and its software? How secure was the data in these machines? And what was IBM's position: Should personal computers be taken seriously or not? By 1980, customers in many industries were telling their IBM contacts to enter the fray. At IBM plants in San Diego, Endicott, N.Y., and Poughkeepsie, N.Y., engineers were forming hobby clubs to learn about the new machines. The logical place to build a small computer was inside IBM's General Products Division, which focused on minicomputers and the successful typewriter business. But the division had no budget or people to allocate to another machine. IBM CEO Frank T. Cary decided to fund the PC's development out of his own budget. He turned to William “Bill" Lowe, who had given some thought to the design of such a machine. Lowe reported directly to Cary, bypassing IBM's complex product-development bureaucracy, which had grown massively during the creation of the System/360 and S/370. 
The normal process to get a new product to market took four or five years, but the incipient PC market was moving too quickly for that. IBM CEO Frank T. Cary authorized a secret initiative to develop a personal computer outside of Big Blue's product-development process. IBM Cary asked Lowe to come back in several months with a plan for developing a machine within a year and to find 40 people from across IBM and relocate them to Boca Raton, Fla. Lowe's plan for the PC called for buying existing components and software and bolting them together into a package aimed at the consumer market. There would be no homegrown operating system or IBM-made chips. The product also had to attract corporate customers, although it was unclear how many of those there would be. Mainframe salesmen could be expected to ignore or oppose the PC, so the project was kept reasonably secret. A friend of Lowe's, Jack Sams, was a software engineer who vaguely knew Bill Gates, and he reached out to the 24-year-old Gates to see if he had an operating system that might work for the new PC. Gates had dropped out of Harvard to get into the microcomputer business, and he ran a 31-person company called Microsoft. While he thought of programming as an intellectual exercise, Gates also had a sharp eye for business. In July 1980, the IBMers met with Gates but were not greatly impressed, so they turned instead to Gary Kildall, president of Digital Research, the most recognized microcomputer software company at the time. Kildall then made what may have been the business error of the century. He blew off the blue-suiters so that he could fly his airplane, leaving his wife—a lawyer—to deal with them. The meeting went nowhere, with too much haggling over nondisclosure agreements, and the IBMers left. Gates was now their only option, and he took the IBMers seriously. That August, Lowe presented his plan to Cary and the rest of the management committee at IBM headquarters in Armonk, N.Y. The idea of putting together a PC outside of IBM's development process disturbed some committee members. The committee knew that IBM had previously failed with its own tiny machines—specifically the Datamaster and the 5110—but Lowe was offering an alternative strategy and already had Cary's support. They approved Lowe's plan. Lowe negotiated terms, volumes, and delivery dates with suppliers, including Gates. To meet IBM's deadline, Gates concluded that Microsoft could not write an operating system from scratch, so he acquired one called QDOS (“quick and dirty operating system") that could be adapted. IBM wanted Microsoft, not the team in Boca Raton, to have responsibility for making the operating system work. That meant Microsoft retained the rights to the operating system. Microsoft paid $75,000 for QDOS. By the early 1990s, that investment had boosted the firm's worth to $27 billion. IBM's strategic error in not retaining rights to the operating system went far beyond that $27 billion; it meant that Microsoft would set the standards for the PC operating system. In fairness to IBM, nobody thought the PC business would become so big. Gates said later that he had been “lucky." Back at Boca Raton, the pieces started coming together. The team designed the new product, lined up suppliers, and was ready to introduce the IBM Personal Computer just a year after gaining the management committee's approval. 
How was IBM able to do this? Much credit goes to Philip Donald Estridge. An engineering manager known for bucking company norms, Estridge turned out to be the perfect choice to ram this project through. He wouldn't show up at product-development review meetings or return phone calls. He made decisions quickly and told Lowe and Cary about them later. He staffed up with like-minded rebels, later nicknamed the “Dirty Dozen." In the fall of 1980, Lowe moved on to a new job at IBM, so Estridge was now in charge. He obtained 8088 microprocessors from Intel, made sure Microsoft kept the development of DOS secret, and quashed rumors that IBM was building a system. The Boca Raton team put in long hours and built a beautiful machine. The IBM PC was a near-instant success The big day came on 12 August 1981. Estridge wondered if anyone would show up at the Waldorf Astoria. After all, the PC was a small product, not in IBM's traditional space. Some 100 people crowded into the hotel. Estridge described the PC, had one there to demonstrate, and answered a few questions. The IBM PC was aimed squarely at the business market, which compelled other computer makers to follow suit. IBM Meanwhile, IBM salesmen had received packets of materials the previous day. On 12 August, branch managers introduced the PC to employees and then met with customers to do the same. Salesmen weren't given sample machines. Along with their customers, they collectively scratched their heads, wondering how they could use the new computer. For most customers and IBMers, it was a new world. Nobody predicted what would happen next. The first shipments began in October 1981, and in its first year, the IBM PC generated $1 billion in revenue, far exceeding company projections. IBM's original manufacturing forecasts called for 1 million machines over three years, with 200,000 the first year. In reality, customers were buying 200,000 PCs per month by the second year. Those who ordered the first PCs got what looked to be something pretty clever. It could run various software packages and a nice collection of commercial and consumer tools, including the accessible BASIC programming language. Whimsical ads for the PC starred Charlie Chaplin's Little Tramp and carried the tag line “A Tool for Modern Times." People could buy the machines at ComputerLand, a popular retail chain in the United States. For some corporate customers, the fact that IBM now had a personal computing product meant that these little machines were not some crazy geek-hippie fad but in fact a new class of serious computing. Corporate users who did not want to rely on their company's centralized data centers began turning to these new machines. Estridge and his team were busy acquiring games and business software for the PC. They lined up Lotus Development Corp. to provide its 1-2-3 spreadsheet package; other software products followed from multiple suppliers. As developers began writing software for the IBM PC, they embraced DOS as the industry standard. IBM's competitors, too, increasingly had to use DOS and Intel chips. And Cary's decision to avoid the product-development bureaucracy had paid off handsomely. IBM couldn't keep up with rivals in the PC market Encouraged by their success, the IBMers in Boca Raton released a sequel to the PC in early 1983, called the XT. In 1984 came the XT's successor, the AT. That machine would be the last PC designed outside IBM's development process. 
John Opel, who had succeeded Cary as CEO in January 1981, endorsed reining in the PC business. During his tenure, Opel remained out of touch with the PC and did not fully understand the significance of the technology. We could conclude that Opel did not need to know much about the PC because business overall was outstanding. IBM's revenue reached $29 billion in 1981 and climbed to $46 billion in 1984. The company was routinely ranked as one of the best run. IBM's stock more than doubled, making IBM the most valuable company in the world. The media only wanted to talk about the PC. On its 3 January 1983 cover, Time featured the personal computer, rather than its usual Man of the Year. IBM customers, too, were falling in love with the new machines, ignoring IBM's other lines of business—mainframes, minicomputers, and typewriters. Don Estridge was the right person to lead the skunkworks in Boca Raton, Fla., where the IBM PC was built. IBM On 1 August 1983, Estridge's skunkworks was redesignated the Entry Systems Division (ESD), which meant that the PC business was now ensnared in the bureaucracy that Cary had bypassed. Estridge's 4,000-person group mushroomed to 10,000. He protested that Corporate had transferred thousands of programmers to him who knew nothing about PCs. PC programmers needed the same kind of machine-software knowledge that mainframe programmers in the 1950s had; both had to figure out how to cram software into small memories to do useful work. By the 1970s, mainframe programmers could not think small enough. Estridge faced incessant calls to report on his activities in Armonk, diverting his attention away from the PC business and slowing development of new products even as rivals began to speed up introduction of their own offerings. Nevertheless, in August 1984, his group managed to release the AT, which had been designed before the reorganization. But IBM blundered with its first product for the home computing market: the PCjr (pronounced “PC junior"). The company had no experience with this audience, and as soon as IBM salesmen and prospective customers got a glimpse of the machine, they knew something had gone terribly wrong. Unlike the original PC, the XT, and the AT, the PCjr was the sorry product of IBM's multilayered development and review process. Rumors inside IBM suggested that the company had spent $250 million to develop it. The computer's tiny keyboard was scornfully nicknamed the “Chiclet keyboard." Much of the PCjr's software, peripheral equipment, memory boards, and other extensions were incompatible with other IBM PCs. Salesmen ignored it, not wanting to make a bad recommendation to customers. IBM lowered the PCjr's price, added functions, and tried to persuade dealers to promote it, to no avail. ESD even offered the machines to employees as potential Christmas presents for a few hundred dollars, but that ploy also failed. IBM's relations with its two most important vendors, Intel and Microsoft, remained contentious. Both Microsoft and Intel made a fortune selling IBM's competitors the same products they sold to IBM. Rivals figured out that IBM had set the de facto technical standards for PCs, so they developed compatible versions they could bring to market more quickly and sell for less. Vendors like AT&T, Digital Equipment Corp., and Wang Laboratories failed to appreciate that insight about standards, and they suffered. (The notable exception was Apple, which set its own standards and retained its small market share for years.) 
As the prices of PC clones kept falling, the machines grew more powerful—Moore's Law at work. By the mid-1980s, IBM was reacting to the market rather than setting the pace. Estridge was not getting along with senior executives at IBM, particularly those on the mainframe side of the house. In early 1985, Opel made Bill Lowe head of the PC business. Then disaster struck. On 2 August 1985, Estridge, his wife, Mary Ann, and a handful of IBM salesmen from Los Angeles boarded Delta Flight 191 headed to Dallas. Over the Dallas airport, 700 feet off the ground, a strong downdraft slammed the plane to the ground, killing 137 people, including the Estridges and all but one of the other IBM employees. IBMers were in shock. Despite his troubles with senior management, Estridge had been popular and highly respected. Not since the death of Thomas J. Watson Sr. nearly 30 years earlier had employees been so stunned by a death within IBM. Hundreds of employees attended the Estridges' funeral. The magic of the PC may have died before the airplane crash, but the tragedy at Dallas confirmed it. More missteps doomed the IBM PC and its OS/2 operating system While IBM continued to sell millions of personal computers, over time the profit on its PC business declined. IBM's share of the PC market shrank from roughly 80 percent in 1982–1983 to 20 percent a decade later. Meanwhile, IBM was collaborating with Microsoft on a new operating system, OS/2, even as Microsoft was working on Windows, its replacement for DOS. The two companies haggled over royalty payments and how to work on OS/2. By 1987, IBM had over a thousand programmers assigned to the project and to developing telecommunications, costing an estimated $125 million a year. OS/2 finally came out in late 1987, priced at $340, plus $2,000 for additional memory to run it. By then, Windows had been on the market for two years and was proving hugely popular. Application software for OS/2 took another year to come to market, and even then the new operating system didn't catch on. As the business writer Paul Carroll put it, OS/2 began to acquire “the smell of failure." Known to few outside of IBM and Microsoft, Gates had offered to sell IBM a portion of his company in mid-1986. It was already clear that Microsoft was going to become one of the most successful firms in the industry. But Lowe declined the offer, making what was perhaps the second-biggest mistake in IBM's history up to then, following his first one of not insisting on proprietary rights to Microsoft's DOS or the Intel chip used in the PC. The purchase price probably would have been around $100 million in 1986, an amount that by 1993 would have yielded a return of $3 billion and in subsequent decades orders of magnitude more. In fairness to Lowe, he was nervous that such an acquisition might trigger antitrust concerns at the U.S. Department of Justice. But the Reagan administration was not inclined to tamper with the affairs of large multinational corporations. More to the point, Lowe, Opel, and other senior executives did not understand the PC market. 
Lowe believed that PCs, and especially their software, should undergo the same rigorous testing as the rest of the company's products. That meant not introducing software until it was as close to bugproof as possible. All other PC software developers valued speed to market over quality—better to get something out sooner that worked pretty well, let users identify problems, and then fix them quickly. Lowe was aghast at that strategy. Salesmen came forward with proposals to sell PCs in bulk at discounted prices but got pushback. The sales team I managed arranged to sell 6,000 PCs to American Standard, a maker of bathroom fixtures. But it took more than a year and scores of meetings for IBM's contract and legal teams to authorize the terms. Lowe's team was also slow to embrace the faster chips that Intel was producing, most notably the 80386. The new Intel chip had just the right speed and functionality for the next generation of computers. Even as rivals moved to the 386, IBM remained wedded to the slower 286 chip. As the PC market matured, the gold rush of the late 1970s and early 1980s gave way to a more stable market. A large software industry grew up. Customers found the PC clones, software, and networking tools to be just as good as IBM's products. The cost of performing a calculation on a PC dropped so much that it was often significantly cheaper to use a little machine than a mainframe. Corporate customers were beginning to understand that economic reality. Opel retired in 1986, and John F. Akers inherited the company's sagging fortunes. Akers recognized that the mainframe business had entered a long, slow decline, the PC business had gone into a more rapid fall, and the move to billable services was just beginning. He decided to trim the ranks by offering an early retirement program. But too many employees took the buyout, including too many of the company's best and brightest. In 1995, IBM CEO Louis V. Gerstner Jr. finally pulled the plug on OS/2. It did not matter that Microsoft's software was notorious for having bugs or that IBM's was far cleaner. As Gerstner noted in his 2002 book, “What my colleagues seemed unwilling or unable to accept was that the war was already over and was a resounding defeat—90 percent market share for Windows to OS/2's 5 percent or 6 percent." The end of the IBM PC IBM soldiered on with the PC until Samuel J. Palmisano, who once worked in the PC organization, became CEO in 2002. IBM was still the third-largest producer of personal computers, including laptops, but PCs had become a commodity business, and the company struggled to turn a profit from those products. Palmisano and his senior executives had the courage to set aside any emotional attachments to their “Tool for Modern Times" and end it. In December 2004, IBM announced it was selling its PC business to Lenovo for $1.75 billion. As the New York Times explained, the sale “signals a recognition by IBM, the prototypical American multinational, that its own future lies even further up the economic ladder, in technology services and consulting, in software and in the larger computers that power corporate networks and the Internet. All are businesses far more profitable for IBM than its personal computer unit." IBM already owned 19 percent of Lenovo, which would continue for three years under the deal, with an option to acquire more shares. 
The head of Lenovo's PC business would be IBM senior vice president Stephen M. Ward Jr., while his new boss would be Lenovo's chairman, Yang Yuanqing. Lenovo got a five-year license to use the IBM brand on the popular ThinkPad laptops and PCs, and to hire IBM employees to support existing customers in the West, where Lenovo was virtually unknown. IBM would continue to design new laptops for Lenovo in Raleigh, N.C. Some 4,000 IBMers already working in China would switch to Lenovo, along with 6,000 in the United States. The deal ensured that IBM's global customers had familiar support while providing a stable flow of maintenance revenue to IBM for five years. For Lenovo, the deal provided a high-profile partner. Palmisano wanted to expand IBM's IT services business to Chinese corporations and government agencies. Now the company was partnered with China's largest computer manufacturer, which controlled 27 percent of the Chinese PC market. The deal was one of the most creative in IBM's history. And yet it remained for many IBMers a sad close to the quarter-century chapter of the PC. This article is based on excerpts from IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the August 2021 print issue as “A Tool for Modern Times." The Essential Question James W. Cortada at the IBM building in Cranford, N.J., in the late 1970s. James W. Cortada How many IBM PCs can you fit in an 18-wheeler? That, according to historian James W. Cortada, is the most interesting question he's ever asked. He first raised the question in 1985, several years after IBM had introduced its wildly successful personal computer. Cortada was then head of a sales team at IBM's Nashville site. “We'd arranged to sell 6,000 PCs to American Standard. They agreed to send their trucks to pick up a certain number of PCs every month. So we needed to know how many PCs would fit," Cortada explains. “I can't even remember what the answer was, only that I was delighted that I thought to ask the question." Cortada worked in various capacities at IBM for 38 years. (That's him in the parking lot of IBM's distinctive building in Cranford, N.J., designed by Victor Lundy.) After he retired in 2012, he became a senior research fellow at the University of Minnesota's Charles Babbage Institute, where he specializes in the history of technology. That transition might seem odd, but shortly before he joined IBM, Cortada had earned a Ph.D. in modern European history from Florida State University. And he continued to research, write, and publish during his IBM career. IEEE Spectrum This month's Past Forward describes the 1981 launch of the IBM PC. It's drawn from Cortada's award-winning history of Big Blue: IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). “I was able to take advantage of the normal skills of a trained historian," Cortada says. “And I had witnessed a third of IBM's history. I knew what questions to ask. I knew the skeletons in the closet." Even before he started the book, a big question was whether he'd reveal those skeletons or not. “I decided to be candid," Cortada says. “I didn't want my grandsons to be embarrassed about what I wrote."
  • U.S. Mint Honors Game Developer Ralph Baer
    Jul 21, 2021 11:00 AM PDT
    THE INSTITUTE Gamers and coin collectors alike can now celebrate Ralph Baer's contributions with an American Innovation dollar from the U.S. Mint. Baer, an IEEE Fellow who is considered the father of the video game, developed the Brown Box, which paved the way for modern home video game consoles including the PlayStation and Xbox. The Brown Box offered table tennis, football, and other games. It let people play on almost any television and thus spawned the commercialization of interactive video games. Baer's Brown Box on display at the Innovation Wing at the Smithsonian Institution, in Washington, D.C. Kris Connor/Getty Images The New Hampshire American Innovation coin, which recognizes the first in-home video game console, mimics an arcade token. It depicts a Brown Box game—handball—on one side. On the other side, the words New Hampshire and Player 1 are engraved on a stamped background. The words In-home video game system and Baer's name encircle the outside in text that is meant to pay homage to Baer's Odyssey game. The coin “honors a story wherein an individual, Ralph H. Baer, made a great and positive difference in our lives and that would not have happened without the time, place, and opportunity that his life in America presented," his son Mark said in an interview with the Manchester, N.H., Ink Link. “It is good to keep that in mind, particularly in these divisive times. To be sure, we have a lot to be thankful for and a lot to celebrate." The mint began the American Innovation dollar coin series in 2018 to showcase innovations from particular states or territories. The series is scheduled to run through 2032. THE HISTORY OF THE FIRST CONSOLE Baer sketched out his idea for the gaming console in 1966 outside the Port Authority Bus Terminal in New York City. He brought his idea to Sanders Associates—now part of BAE Systems—a defense contractor in Nashua, N.H., where he worked. An intrigued manager gave Baer US $2,500 for materials and assigned two engineers from the company to help him develop a prototype. The Brown Box, a soundless multiplayer system, included clear plastic overlay sheets that could be taped to the player's TV screen to add color, playing fields, and other graphics. The console ran games off printed-circuit-board cartridges. In 1968 the company licensed the system to television maker Magnavox, which named it the Odyssey. The company offered it in the United States in 1972 and sold 130,000 units the first year. Baer's workshop on display in the Innovation Wing at the Smithsonian Institution, in Washington, D.C. Kris Connor/Getty Images Baer's 1971 patent on a “television gaming and training apparatus," the first U.S. patent for video game technology, was based on the Brown Box. He received the 2014 IEEE Edison Medal “for pioneering and fundamental contributions to the video-game and interactive multimedia-content industries." The medal is sponsored by Samsung. The console was named an IEEE Milestone in 2015. Administered by the IEEE History Center, the Milestone program recognizes outstanding technical developments from around the world. Baer's original video games are on display in the Innovation Wing at the Smithsonian Institution, in Washington, D.C. IEEE membership offers a wide range of benefits and opportunities for those who share a common interest in technology. If you are not already a member, consider joining IEEE and becoming part of a worldwide network of more than 400,000 students and professionals.
  • Nothing Can Keep This Drone Down
    Jul 21, 2021 09:52 AM PDT
    When life knocks you down, you’ve got to get back up. Ladybugs take this advice seriously in the most literal sense. If caught on their backs, the insects are able to use their tough exterior wings, called elytra (of late made famous in the game Minecraft), to right themselves in just a fraction of a second. Inspired by this approach, researchers have created self-righting drones with artificial elytra. Simulations and experiments show that the artificial elytra can not only help salvage fixed-wing drones from compromising positions, but also improve the aerodynamics of the vehicles during flight. The results are described in a study published July 9 in IEEE Robotics and Automation Letters. Charalampos Vourtsis is a doctoral assistant at the Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne in Switzerland, who co-created the new design. He notes that beetles, including ladybugs, have existed for tens of millions of years. “Over that time, they have developed several survival mechanisms that we found to be a source of inspiration for applications in modern robotics,” he says. His team was particularly intrigued by beetles’ elytra, which for ladybugs are their famous black-spotted, red exterior wings. Underneath the elytra is the hind wing, the semi-transparent appendage that’s actually used for flight. When stuck on their backs, ladybugs use their elytra to stabilize themselves, and then thrust their legs or hind wings in order to pitch over and self-right. Vourtsis’ team designed Micro Aerial Vehicles (MAVs) that use a similar technique, but with actuators to provide the self-righting force. “Similar to the insect, the artificial elytra feature degrees of freedom that allow them to reorient the vehicle if it flips over or lands upside down,” explains Vourtsis. The researchers created and tested artificial elytra of different lengths (11, 14 and 17 centimeters) and torques to determine the most effective combination for self-righting a fixed-wing drone. While torque had little impact on performance, the length of elytra was found to be influential. On a flat, hard surface, the shorter elytra lengths yielded mixed results. However, the longer length was associated with a perfect success rate. The longer elytra were then tested on different inclines of 10°, 20° and 30°, and at different orientations. The drones used the elytra to right themselves in all scenarios, except for one position at the steepest incline. The design was also tested on seven different terrains: pavement, coarse sand, fine sand, rocks, shells, wood chips and grass. The drones were able to self-right with a perfect success rate across all terrains, with the exception of grass and fine sand. Vourtsis notes that the current design was made from widely available materials and a simple scale model of the beetle’s elytra—but further optimization may help the drones self-right on these more difficult terrains. As an added bonus, the elytra were found to add non-negligible lift during flight, which offsets their weight. Vourtsis says his team hopes to benefit from other design features of the beetles’ elytra. “We are currently investigating elytra for protecting folding wings when the drone moves on the ground among bushes, stones, and other obstacles, just like beetles do,” explains Vourtsis. “That would enable drones to fly long distances with large, unfolded wings, and safely land and locomote in a compact format in narrow spaces.”
  • Cuba Jamming Ham Radio? Listen For Yourself
    Jul 21, 2021 08:18 AM PDT
    As anti-government protests spilled onto the streets in Cuba on July 11, something strange was happening on the airwaves. Amateur radio operators in the United States found that suddenly parts of the popular 40-meter band were being swamped with grating signals. Florida operators reported the signals were loudest there, enough to make communication with hams in Cuba impossible. Other operators in South America, Africa, and Europe also reported hearing the signal, and triangulation software that anyone with a web browser can try placed the source of the signals as emanating from Cuba. Cuba has a long history of interfering with broadcast signals, with several commercial radio stations in Florida allowed to operate at higher than normal power levels to combat jamming. But these new mystery signals appeared to be intentionally targeting amateur radio transmissions. A few hours after the protest broke out on the 11th, ham Alex Valladares (W7HU) says he was speaking with a Cuban operator on 7.130 megahertz in the 40-meter band, when their conversation was suddenly overwhelmed with interference. “We moved to 7170, and they jam the frequency there,” he says. Valladares gave up for the night, but the following morning, he says, “I realize that they didn’t turn off those jammers. [Then] we went to [7]140 the next day and they put jamming in there.” Valladares explains he escaped from Cuba to the United States in a raft in 2005. Like many hams in the large Cuban-American community in Florida, he frequently talks with operators in Cuba, and now he says the government there is “jamming the signal to prevent the Cuban people who listen to us and to prevent them from talking between them[selves].” Valladares has also heard reports that VHF 2-meter band repeaters have been shut down in Cuba. Two-meter band radios are typically low-power handheld walkie-talkies used for short-range communications. Their range is often extended by using fixed relay repeaters, which retransmit an incoming signal using a more powerful transmitter and a well-placed antenna. Because Florida and Cuba are so close—only about 175 kilometers separates them at their closest point—it’s possible for 2-meter communications to cross the distance. “It was possible to go between Miami and Havana … with an external antenna you can talk to Havana easy because it’s not that far, it’s like 230 miles away,” says Valladares. The short distance between southern Florida (with its large Cuban American population) and Cuba may also be one reason why the 40-meter band is plagued with interference while other shortwave bands used for long range ham radio, such as the 20-meter band, have been left untouched. Shortwave signals travel long distances by bouncing between the Earth and layers in the ionosphere high above. Signals in different bands are reflected by different layers, and these layers change depending on whether it is day or night. (Dealing with the vagaries of ionospheric propagation is one of the fun challenges of ham radio as a hobby.) The height of these layers also effectively sets a minimum range for a band, known as the skip distance. For the 20-meter band, the skip distance varies between 700 and 1600 kilometers in length, making it unsuitable for communications between southern Florida and Cuba. During daylight hours, however, the skip distance of the 40-meter band allows much shorter range communications. 
“The 20-meter band…doesn’t allow you to work with Havana,” says Valladares, “But the 40-meter band… it is possible to work and talk all day long.” Another factor is that 40-meter transceivers are relatively simple to build and maintain. “In Cuba, there are a lot of people that have homemade radios. And those homemade radios are [for the] 40-meter band, which is the easiest band to make a homemade radio,” says Valladares. Valladares alerted other hams to the interference, and soon operators were comparing notes on a forum on the QRZ.com website for hams. Hams across the southern United States and as far away as Minnesota and Rhode Island as well as Suriname in South America reported picking up the signals. The hams soon turned to nailing down the source of the signal. In previous decades, locating the source of a distant shortwave signal would have required special direction-finding equipment, but modern hams have an ace up their sleeves in the form of the public KiwiSDR network. A KiwiSDR is a software-defined radio board that attaches to a BeagleBone computer running Linux. It can receive signals from 10 kHz to 30 MHz. Because the boards are network-enabled, anyone with a web browser can access a public KiwiSDR station and tune in to whatever it is receiving. Crucially, the KiwiSDR software allows users to sample multiple stations around the world simultaneously. If at least three stations can hear a given signal, a TDoA (time difference of arrival) algorithm can estimate the origin of the signal. Josh Nass (KI6NAZ), who is based in California and hosts the Ham Radio Crash Course channel on YouTube, was one of the first to use the network to locate the source. The weekend when the Cuban protests started, he says, “I noticed the interference covering a lot of the [40-meter] band. Then I had individuals reach out to me, Cuban-Americans living around the country but a lot of them out of Miami, and they said ‘we think there’s a coordinated jamming effort going on…’” This made him turn to the KiwiSDR network and the TDoA algorithm. “Sure enough, in parallel a lot of my friends were also doing the same, and we largely had the same [result], that it looks like the signals are coming from the eastern side of Cuba.” As of this writing, the interference can still be heard via the KiwiSDR network: you can easily use a map interface to pick a station in Florida, such as the one operated by W1NEJ in Boca Raton, and listen in and then try your hand at locating the source. If you’re new to ham radio or KiwiSDR, here are a few pointers: Set the frequency to 7106.5 kHz to start with in the control box, and make sure to click the button for “LSB” to make sure you’re getting the right kind of demodulation to best hear the signal, which sounds like the unfortunate offspring of a frog and a Dalek. You can then use the “extension” drop-down menu to select the “TDoA” option, which will give you a map of other stations you can combine with Boca Raton to localize the source. “What you try to do is get a good sampling of SDRs that can all receive the signal that you want to detect, and that’s the tricky part of it, because you have to log in all these SDRs, go to the signal you are looking to triangulate, and make sure they can all hear it” before using the TDoA algorithm, advises Nass. Alain Arocha (K4KKC) is a Florida-based ham who noticed the interference early and who has also been active on the QRZ forum. 
He agrees that the KiwiSDR software can give misleading results if not used correctly, so he went the extra mile and verified the ability of the KiwiSDR stations that he was using for his location hunting by transmitting his own test signal on another frequency and making sure the SDRs could receive it. Arocha says he’s been disappointed by the lack of response from some hams, who view the interfering signals as a curiosity as they can deal with it by simply shifting to other bands or using digital modes unavailable to Cuban operators with their more basic equipment. “The response has been real strong from the people who talk to Cuba, but some people couldn’t care less [as long as it doesn’t affect them], but this is making that part of the spectrum unusable,” says Arocha, saying he’s seen operators from Germany and Spain report the interference. “It gives me a sense of frustration for people not to give a damn about this.” In particular, Arocha is annoyed that the American Radio Relay League (ARRL) is not being vocal about the issue. “We are aware of reports from radio amateurs of non-amateur signals observed in the amateur bands, and most likely originating in the direction of Cuba,” says the ARRL’s Bob Inderblizten (NQ1R). However, because American hams have access to so much spectrum that the overall impact in the United States is limited, and because the ARRL is a national rather than an international organization, “There’s no real role for ARRL” in this situation, says Inderblizten. However, he adds, “there’s a mechanism for amateurs to report intruders on the amateur band through the network of amateur network societies, as organized within the International Amateur Radio Union.” The IARU relies on national governments to enforce regulations, so if the interference is indeed being generated by the Cuban government, any notification is likely to have little effect. More pointed complaints would have to come from the US government, “and such bold and publicly reported interference is most certainly known by FCC and other government agencies,” says Inderblizten.
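To make the TDoA localization idea described above concrete, here is a toy multilateration sketch: given at least three receivers with known coordinates and measured arrival times, it grid-searches for the source location whose predicted pairwise time differences best match the measurements. The station positions and timings below are fabricated for illustration, and the real KiwiSDR TDoA extension is far more sophisticated—it cross-correlates GPS-timestamped IQ samples and must contend with ionospheric propagation paths rather than the straight-line, constant-speed distances assumed here.

```cpp
// Toy TDoA (time-difference-of-arrival) localization by brute-force grid search.
// All coordinates and arrival times are made up for illustration only.
#include <cmath>
#include <cstdio>
#include <vector>

struct Station { double lat, lon, toa; };   // toa = measured time of arrival, seconds

const double kPi = 3.14159265358979323846;
const double kEarthRadiusM = 6371000.0;
const double kC = 299792458.0;              // assumed propagation speed, m/s

// Great-circle distance between two lat/lon points (haversine formula).
double distanceM(double lat1, double lon1, double lat2, double lon2) {
  auto rad = [](double d) { return d * kPi / 180.0; };
  double dlat = rad(lat2 - lat1), dlon = rad(lon2 - lon1);
  double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
             std::cos(rad(lat1)) * std::cos(rad(lat2)) *
             std::sin(dlon / 2) * std::sin(dlon / 2);
  return 2.0 * kEarthRadiusM * std::asin(std::sqrt(a));
}

int main() {
  // Hypothetical receivers (roughly Boca Raton, New Orleans, San Juan) with
  // fabricated arrival times relative to an arbitrary clock.
  std::vector<Station> rx = {
    {26.37, -80.10, 0.000000},
    {29.95, -90.07, 0.002300},
    {18.47, -66.11, 0.001900},
  };

  double bestLat = 0, bestLon = 0, bestErr = 1e18;
  // Coarse grid search over the Caribbean region.
  for (double lat = 15.0; lat <= 30.0; lat += 0.1) {
    for (double lon = -90.0; lon <= -65.0; lon += 0.1) {
      double err = 0;
      // Compare predicted vs. measured arrival-time differences for each station pair.
      for (size_t i = 0; i < rx.size(); i++) {
        for (size_t j = i + 1; j < rx.size(); j++) {
          double predicted = (distanceM(lat, lon, rx[i].lat, rx[i].lon) -
                              distanceM(lat, lon, rx[j].lat, rx[j].lon)) / kC;
          double measured = rx[i].toa - rx[j].toa;
          err += (predicted - measured) * (predicted - measured);
        }
      }
      if (err < bestErr) { bestErr = err; bestLat = lat; bestLon = lon; }
    }
  }
  std::printf("Best-fit source near lat %.1f, lon %.1f (residual %.3e)\n",
              bestLat, bestLon, bestErr);
  return 0;
}
```

With only three receivers the fit can be ambiguous, which is why the hams quoted above stress sampling as many SDRs as can actually hear the signal before running the algorithm.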
  • Solar-to-Hydrogen Water Splitter Outlasts Next Best Tech By 14x
    Jul 21, 2021 08:03 AM PDT
    To split water into hydrogen on a large scale, we need technologies that are sustainable, efficient, scalable and durable. Using solar energy (or other renewable energy sources) to split water delivers sustainability, while recent research has made key inroads toward efficiency and scalability. Now Japanese researchers say they’ve made an important step toward durability. Hydrogen today comes primarily from natural gas, which pumps out a lot of carbon and methane pollution into the atmosphere. By contrast, the sustainable solar-to-hydrogen approach has concentrated on photoelectrochemical (PEC) water splitting. In PEC systems, which nominally generate no greenhouse gases, special catalyst materials absorb sunlight to directly split water into hydrogen and oxygen. But these devices have also been limited by low efficiencies and short lifetimes. While previous PEC technologies have typically only lasted about a week, the new system is dramatically longer-lived. “We confirmed 100 days durability, which is one of the longest periods among experimentally confirmed PEC water splitting materials,” says Masashi Kato, a professor of electrical and mechanical engineering at Nagoya Institute of Technology. Durability will be key for maintenance-free systems that can be installed at remote locations, he says. Green hydrogen research and technologies have been gaining momentum around the world. Several companies and initiatives are producing it by using wind or solar electricity to split water via electrolysis. Direct solar water-splitting using PEC is a more elegant, one-step way to harness solar energy for hydrogen production. But it has proven challenging to do on a large scale. Devices aren’t cheap, efficient, or durable enough yet to move out of the lab. Photocatalysts do the heavy lifting in PEC devices. Kato and his colleagues designed a tandem PEC device that uses two electrodes, each coated with a different catalyst. One is titanium dioxide, a material commonly used in white paint and sunscreen, and the other is a cubic silicon carbide that Kato’s team has developed and reported previously. The two catalysts absorb different parts of the light spectrum and work in a complementary way to split water. Titanium dioxide is an n-type photocatalyst, which soaks up ultraviolet light and generates electrons, triggering chemical reactions that produce oxygen. And the silicon carbide material the researchers have made is a p-type catalyst that absorbs visible light to produce hydrogen. Together, the two reactions sustain each other for a while to split water into hydrogen and oxygen when a voltage is applied across the device placed in water. This results in a five-fold longevity boost over previous technologies, achieving 100-day operation, Kato says. The efficiency of the system reported in the journal Solar Energy Materials and Solar Cells is relatively low at 0.74 percent. Most solar-to-hydrogen technologies have achieved efficiencies in the 1-2 percent range, but some research teams have achieved substantially higher efficiencies. Researchers from Italy and Israel recently reported a method that harnesses semiconductor nanorods topped with platinum spheres that convert almost 4 percent of solar energy into hydrogen fuel. A Belgian research team at KU Leuven in 2019 reported a solar panel prototype that absorbs moisture from the air and splits it into hydrogen and oxygen with 15 percent efficiency. According to the U.S. 
Department of Energy, 5-10 percent efficiency should be enough for a practical solar hydrogen system. Kato says that it’s the titanium dioxide electrode that limits the efficiency of the system, and the team is now looking for other photocatalysts that would boost efficiency while still working in concert with the silicon carbide electrode. However, the combination of durability and efficiency still sets their device apart, he says.
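For context on how such percentages are typically defined (a general note, not necessarily the exact accounting used in the Nagoya paper): the commonly quoted solar-to-hydrogen (STH) efficiency compares the chemical energy stored in the hydrogen produced with the solar energy striking the device,

\eta_{\mathrm{STH}} = \frac{r_{\mathrm{H_2}} \, \Delta G^{0}}{P_{\mathrm{sun}} \, A}

where r_{H_2} is the hydrogen production rate in moles per second, \Delta G^{0} \approx 237 kJ/mol is the Gibbs free energy of water splitting, P_{\mathrm{sun}} is the incident solar irradiance, and A is the illuminated area. This is the zero-bias definition; because the device described above operates with an applied voltage, the reported 0.74 percent may follow a bias-corrected convention, so the paper should be consulted for the exact formula used.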
  • Cloud Computing’s Coming Energy Crisis
    Jul 21, 2021 08:00 AM PDT
    How much of our computing now happens in the cloud? A lot. Providers of public cloud services alone take in more than a quarter of a trillion U.S. dollars a year. That's why Amazon, Google, and Microsoft maintain massive data centers all around the world. Apple and Facebook, too, run similar facilities, all stuffed with high-core-count CPUs, sporting terabytes of RAM and petabytes of storage. These machines do the heavy lifting to support what's been called “surveillance capitalism": the endless tracking, user profiling, and algorithmic targeting used to distribute advertising. All that computing rakes in a lot of dollars, of course, but it also consumes a lot of watts: Bloomberg recently estimated that about 1 percent of the world's electricity goes to cloud computing. That figure is poised to grow exponentially over the next decade. Bloomberg reckons that, globally, we might exit the 2020s needing as much as 8 percent of all electricity to power the future cloud. That might seem like a massive jump, but it's probably a conservative estimate. After all, by 2030, with hundreds of millions of augmented-reality spectacles streaming real-time video into the cloud, and with the widespread adoption of smart digital currencies seamlessly blending money with code, the cloud will provide the foundation for nearly every financial transaction and user interaction with data. How much energy can we dedicate to all this computing? In an earlier time, we could have relied on Moore's Law to keep the power budget in check as we scaled up our computing resources. But now, as we wring out the last bits of efficiency from the final few process nodes before we reach atomic-scale devices, those improvements will hit physical limits. It won't be long until computing and power consumption are once again strongly coupled—as they were 60 years ago, before integrated CPUs changed the game. We seem to be hurtling toward a brick wall, as the rising demand for computing collides with decreasing efficiencies. We can't devote the whole of the planet's electricity generation to support the cloud. Something will have to give. The most immediate solutions will involve processing more data at the edge, before it goes into the cloud. But that only shifts the burden, buying time for rethinking how to manage our computing in the face of limited power resources. Software and hardware engineering will no doubt reorient their design practices around power efficiency. More code will find its way into custom silicon. And that code will find more reasons to run infrequently, asynchronously, and as minimally as possible. All of that will help, but as software progressively eats more of the world—to borrow a now-famous metaphor—we will confront this challenge in ever-wider realms. We can already spy one face of this future in the nearly demonic coupling of energy consumption and private profit that provides the proof-of-work mechanism for cryptocurrencies like Bitcoin. Companies like Square have announced investments in solar energy for Bitcoin mining, hoping to deflect some of the bad press associated with this activity. But more than public relations is at stake. Bitcoin asks us right now to pit the profit motive against the health of the planet. More and more computing activities will do the same in the future. Let's hope we never get to a point where the fate of the Earth hinges on the fate of the transistor. 
This article appears in the August 2021 print issue as “Cloud Computing's Dark Cloud."
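A back-of-the-envelope reading of the Bloomberg figures cited above (the arithmetic is not from the article itself): growing from roughly 1 percent of global electricity in 2021 to 8 percent by 2030—about nine years—implies a compound growth rate of

\left(\frac{8\%}{1\%}\right)^{1/9} \approx 1.26

that is, the cloud's share of the world's electricity rising by roughly 26 percent per year, which is what makes the trajectory exponential rather than incremental.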
  • How to Keep the Automotive Chip Shortage From Happening Again
    Jul 21, 2021 06:00 AM PDT
    “The automotive supply chain is a very complicated animal,” said Bob O’Donnell, president of TECHnalysis Research, at an automotive technology panel held Monday at GlobalFoundries Fab 8 in Malta, N.Y. “And very few people understand it.” O’Donnell made this observation as part of a discussion involving executives from the auto and chip industries. The panelists portrayed a supply chain whose shortcomings have recently brought car makers to their knees. The panelists—who consisted of executives from chip manufacturer GlobalFoundries, IC maker Analog Devices, system integrator Aptiv, and automaker Ford—all agreed that this must never happen again. Meanwhile, the semiconductor content in cars is growing at an unprecedented rate—and those semiconductors are being integrated into new architectures driven by the change to electric vehicles. “We have to revisit risk management across the board,” said Jonathan Jennings, vice president of global commodities purchasing and supplier technical assistance at Ford. He explained that the industry thought it had been covering itself against risks by using multiple suppliers. However, car makers did not realize that those suppliers or the suppliers of those suppliers were all using the output of the same small set of semiconductor foundries. Kevin P. Clark, president and CEO of Aptiv, which as a Tier 1 supplier builds electronics systems for automakers, presented a sense of the scale of his company’s part of the supply chain, saying, “We receive 220 million parts from 400 suppliers daily. Of which we produce more than 90 million components shipped to 7000 to 8000 customers daily.” Car makers typically deal closely with their Tier 1 suppliers, and Jennings said people in his position rarely met with chip manufacturers directly. “But we have now,” he said. The suppliers agreed that they need deeper relationships with the car makers. “What it requires is strategic relationships all the way down the chain,” said Aptiv’s Clark. It will take, he continued, “co-investment not just from a dollars standpoint, but from a relationship standpoint.” What might that mean for chip manufacturers like GlobalFoundries? According to GlobalFoundries senior vice president Mike Hogan, car maker involvement could lead to faster introduction of new chip technologies. For example, the first version of new tech could be designed to meet auto industry standards rather than today’s model, where tech developed for other industries is adapted to car makers’ needs. This reimagining of the supply chain is happening as the car industry confronts big changes. “If you look at where we’re going from a technology standpoint, we will advance more in the next ten years than we will have in the last hundred,” Jennings said. The move to battery electric vehicles presents a major chance to simplify the way the electronic systems in vehicles are designed. With existing internal combustion cars, those electronics have been layered on as new technologies were developed and deployed, leading to a lot of complexity in both hardware and software, explains Hogan. (For a deep dive into just how complex the software situation has gotten, read “How Software Is Eating the Car.”) Battery electric redesigns offer “a real opportunity to rethink how a vehicle is architected,” said Aptiv’s Clark. But for the supply chain to work efficiently, he thinks suppliers need to participate in that rearchitecting. How long will it take before this dream supply chain emerges? It will likely be the work of years, executives say.
  • From Spacecraft to Sensor Fusion
    Jul 20, 2021 12:00 PM PDT
    It's easy to set up multiple video feeds in all kinds of locations. Merging those feeds together, combining them with other information, and putting them into context is a tougher challenge, especially when your customers include first responders and intelligence agencies. But that's the job that Cubic Corp. hired Iverson Bell III to do in late 2020, when they chose him “to lead the team responsible for transforming the company's Unified Video project into a more full-featured video/communications platform, including AI/machine learning and support for distributed sensors and more data types," says Bell. Bell had previously been at Northrop Grumman, doing spacecraft design and testing, and had researched electrodynamic tethers, including for use with CubeSats, as part of his postdoc work with Brian Gilchrest at the University of Michigan, where Bell earned his master's and Ph.D. Moving to his role at Cubic's Hanover, Md., location might seem like a big jump between two very different specialties, but Bell was able to leverage common skills and other experiences in dealing with complex data handling and processing. For example, Bell created sophisticated data-analysis models during an internship at the Johns Hopkins University Applied Physics Laboratory; performed software testing along with electrical-system integration and testing for the James Webb Space Telescope program at Northrop Grumman; and has been an agile lead manager on other software development projects. The path to his current job and project, according to Bell, reflects a mix of exposure, opportunities, and risk-taking—both by him and by people in charge of him—plus, of course, a lot of hard work. “My undergraduate engineering focus at Howard [University, in Washington, D.C.,] was applied electromagnetics, like antennas and waveguides—so there was lots of math involved. And I was also interested in signal processing, like how antennas convert data into a digital signal. Then, in grad school at the University of Michigan, my concentration was electromagnetics, and I got a lot of exposure to remote sensors, radars, and other cool stuff in more depth. “I chose good topics by pure luck," says Bell. Bell attributes his interest in science and engineering to early exposure through books, other media, and family. “I read books like Marshall Brain's How Stuff Works series, and watched the Discovery Channel," he recalls. “My mother is a pediatrician. And she was a chemistry undergrad, and cooked—and cooking was chemistry. And my older sister has been a role model—she's a civil engineer—and I was exposed to her taking classes. When I got to high school and was liking math and science, she asked me, 'What do you want to major in?'" “Be ready to take risks and learn as you go," Bell advises. “For example, when I was part of the group conducting electrical tests on the Webb telescope, we were working with the mechanical engineering and test team, which I hadn't been exposed to previously. It was something to learn. “It helps when folks are willing to take a chance on you—so if you're in a position to employ someone, take that chance on them…. I have a passion for mentorship," says Bell. “I want to help improve education for younger people, expose them to STEM topics.... But my interest is shifting from individual mentoring to wanting to help address the larger policy and curriculum aspects of the problem." 
Bell is enjoying the shift from the process of building spacecraft—“where the process takes time and you often get only one chance for things to work"—to delivering a real-time service. It's “very different to see teams working on high-end leading-edge engineering at a very fast pace," he says. “It's like changing the plane's engine while you're flying it."
  • 12 Robotics Teams Will Hunt For (Virtual) Subterranean Artifacts
    Jul 20, 2021 10:30 AM PDT
    Last week, DARPA announced the twelve teams who will be competing in the Virtual Track of the DARPA Subterranean Challenge Finals, scheduled to take place in September in Louisville, KY. The robots and the environment may be virtual, but the prize money is very real, with $1.5 million of DARPA cash on the table for the teams who are able to find the most subterranean artifacts in the shortest amount of time. You can check out the list of Virtual Track competitors here, but we’ll be paying particularly close attention to Team Coordinated Robotics and Team BARCS, who have been trading first and second place back and forth across the three previous competitions. But there are many other strong contenders, and since nearly a year will have passed between the Final and the previous Cave Circuit, there’s been plenty of time for all teams to have developed creative new ideas and improvements. As a quick reminder, the SubT Final will include elements of tunnels, caves, and the urban underground. As before, teams will be using simulated models of real robots to explore the environment looking for artifacts (like injured survivors, cell phones, backpacks, and even hazardous gas), and they’ll have to manage things like austere navigation, degraded sensing and communication, dynamic obstacles, and rough terrain. While we’re not sure exactly what the Virtual Track is going to look like, one of the exciting aspects of a virtual competition like this is how DARPA is not constrained by things like available physical space or funding. They could make a virtual course that incorporates the inside of the Egyptian pyramids, the Cheyenne Mountain military complex, and my basement, if they were so inclined. We are expecting a combination of the overall themes of the three previous virtual courses (tunnel, cave, and urban), but connected up somehow, and likely with a few surprises thrown in for good measure. To some extent, the Virtual Track represents the best case scenario for SubT robots, in the sense that fewer things will just spontaneously go wrong. This is something of a compromise, since things very often spontaneously go wrong when you’re dealing with real robots in the real world. This is not to diminish the challenges of the Virtual Track in the least—even the virtual robots aren’t invincible, and their software will need to keep them from running into simulated walls or falling down simulated stairs. But as far as I know, the virtual robots will not experience damage during transport to the event, electronics shorting, motors burning out, emergency stop buttons being accidentally pressed, and that sort of thing. If anything, this makes the Virtual Track more exciting to watch, because you’re seeing teams of virtual robots on their absolute best behavior challenging each other primarily on the cleverness and efficiency of their programmers. The other reason that the Virtual Track is more exciting is that unlike the Systems Track, there are no humans in the loop at all. Teams submit their software to DARPA, and then sit back and relax (or not) and watch their robots compete all by themselves in real time. This is a hugely ambitious way to do things, because a single human even a little bit in the loop can provide the kind of critical contextual world knowledge and mission perspective that robots often lack. A human in there somewhere is fine in the near to medium term, but full autonomy is the dream. 
As for the Systems Track (which involves real robots on the physical course in Louisville), we’re not yet sure who all of the final competitors will be. The pandemic has made travel complicated, and some international teams aren’t yet sure whether they’ll be able to make it. Either way, we’ll be there at the end of September, when we’ll be able to watch both the Systems and Virtual Track teams compete for the SubT Final championship.
  • Ignoring Intel Rumors, GlobalFoundries Will Do $1-billion Expansion
    Jul 20, 2021 08:27 AM PDT
    It used to be rare that the semiconductor industry made the news, as Thomas Caulfield, CEO of GlobalFoundries, said on Monday. Caulfield was addressing an assembly of industry partners and political dignitaries including Senator Charles Schumer of New York and Commerce Secretary Gina Raimondo. “Every day now we’re in the news, not about what we did but what we haven’t been able to do enough of,” he said. “Our manufacturing capacity, worldwide, has been outpaced by demand.” Hoping to cash in on remedying that situation, GlobalFoundries announced a $1-billion investment aimed at adding 150,000 wafers per year of capacity at Fab 8 in Malta, N.Y., the company’s most advanced facility. Caulfield also announced that GlobalFoundries would build a new fab in Malta, but he provided no details about it. The truth is that GlobalFoundries has been in the news quite a lot lately, but not for any reason Caulfield would talk about. Intel is rumored to be interested in purchasing GlobalFoundries for US $30 billion, according to the Wall Street Journal. Caulfield would not acknowledge the rumors. Such a tie-up would be a good fit for Intel, analysts say, because the company is in the process of rebooting its foundry business. GlobalFoundries would provide both ready-made capacity and, perhaps more importantly, the know-how to get that business off the ground. The Fab 8 expansion would generate 1,000 jobs at GlobalFoundries and thousands more in the local economy, Caulfield said. The $1-billion expansion comes on top of a global $1.4-billion expansion and a new fab in Singapore that will add 450,000 wafers per year. Sen. Charles Schumer (left), GlobalFoundries CEO Thomas Caulfield, and Commerce Secretary Gina Raimondo. Credit: GlobalFoundries. Fab 8 has had its ups and downs. In 2018, the company installed two extreme ultraviolet lithography machines in a drive to produce 7-nanometer chips, then the industry’s most advanced. However, company executives calculated that the business would never be profitable if it continued chasing Moore’s Law, so they abandoned the project. Instead, the company has focused on adding technology features to its existing processes, such as embedded MRAM memory and advanced RF capabilities. IBM is now suing GlobalFoundries because it says the latter was obligated to produce IBM chips at 10 nanometers or better, a process technology between 7 nm and the 12 nm that GlobalFoundries currently operates at Fab 8. The lack of detail about the new fab in Malta, N.Y., may be in part because there is likely a considerable amount of federal government money at stake. Schumer scored a rare bipartisan win pushing the United States Innovation and Competition Act of 2021 through the U.S. Senate. The bill includes $52 billion for semiconductor manufacturing and R&D, but it must pass in the House of Representatives before it’s signed into law. And after that, the Commerce Department must figure out how to dispense the funds. The $52 billion seems small compared to other programs in the works in South Korea and Europe, which could be in the hundreds of billions of dollars over a decade. Raimondo described the U.S. plan as “a historic, once-in-a-generation investment.” But she also acknowledged that $52 billion won’t be enough. “It’s just the tip of the spear.”
  • COVID-19 Forced Us All to Experiment. What Have We Learned?
    Jul 20, 2021 08:00 AM PDT
    LIFE IS A HARD SCHOOL: First it gives us the test and only then the lesson. Indeed, throughout history humanity has learned much from disasters, wars, financial ruin—and pandemics. A scholarly literature has documented this process in fields as diverse as engineering, risk reduction, management, and urban studies. And it's already clear that the COVID-19 pandemic has sped up the arrival of the future along several dimensions. Remote working has become the new status quo in many sectors. Teaching, medical consulting, and court cases are expected to stay partly online. Delivery of goods to the consumer's door has supplanted many a retail storefront, and there are early signs that such deliveries will increasingly be conducted by autonomous vehicles. On top of the damage it has wreaked on human lives, the pandemic has brought increased costs to individuals and businesses alike. At the same time, however, we can already measure solid improvements in productivity and innovation: Since February 2020, some 60 percent of firms in the United Kingdom and in Spain have adopted new digital technologies, and 40 percent of U.K. firms have invested in new digital capabilities. New businesses came into being at a faster rate in the United States than in previous years. We propose to build on this foundation and find a way to learn not just from crises but even during the crisis itself. We argue for this position not just in the context of the COVID-19 pandemic but also toward the ultimate goal of improving our ability to handle things we can't foresee—that is, to become more resilient. To find the upside of emergencies, we first looked at the economic effects of a tidy little crisis, a two-day strike that partially disrupted service of the London Underground in 2014. We discovered that the approximately 5 percent of the commuters who were forced to reconsider their commute ended up finding better routes, which these people continued after service was restored. In terms of travel time, the strike produced a net benefit to the system because the one-off time costs of the strike were less than the enduring benefits for this minority of commuters. Why had commuters not done their homework beforehand, finding the optimal route without pressure? After all, their search costs would have been quite low, but the benefits from permanently improving their commute might well have been large. Here, the answer seems to be that commuters were stuck in established yet inefficient habits; they needed a shock to prod them into making their discovery. Icelandic volcano eruption of 1973. Photo-Illustration: Chad Hagen; Original Photo: Bettmann/Getty Images A similar effect followed the eruption of a long-dormant Icelandic volcano in 1973. For younger people, having their house destroyed led to an increase of 3.6 years of education and an 83 percent increase in lifetime earnings, due to their increased probability of migrating away from their destroyed town. The shock helped them overcome a situation of being stuck in a location with a limited set of potential occupations, to which they may not have been well suited. As economists and social scientists, we draw two fundamental insights from these examples of forced experimentation. First, the costs and benefits of a significant disruption are unlikely to fall equally on all those affected, not least at the generational level. 
Second, to ensure that better ways of doing things are discovered, we need policies to help the experiment's likely losers get a share of the benefits. Because large shocks are rare, research on their consequences tends to draw from history. For example, economic historians have argued that the Black Death plague may have contributed to the destruction of the feudal system in Western Europe by increasing the bargaining power of laborers, who were more in demand. The Great Fire of London in 1666 cleared the way, literally, for major building and planning reforms, including the prohibition of new wooden buildings, the construction of wider roads and better sewers, and the invention of fire insurance. History also illustrates that good data is often a prerequisite for learning from a crisis. John Snow's 1854 Broad Street map of cholera contagion in London was not only instrumental in identifying lessons learned—the most important being that cholera was transmitted via the water supply—but also in improving policymaking during the crisis. He convinced the authorities to remove the handle from the pump of a particular water source that had been implicated in the spread of the disease, thereby halting that spread. Four distinct channels lead to the benefits that may come during a disruption to our normal lives. China enters world markets as major exporter of industrial products. Photo-Illustration: Chad Hagen; Original Photo: Tao Images/Alamy Habit disruption occurs when a shock forces agents to reconsider their behavior, so that at least some of them can discover better alternatives. London commuters found better routes, and Icelandic young people got more schooling and found better places to live. Selection involves the destruction of weaker firms so that only the more productive ones survive. Resources then move from the weaker to stronger entities, and average productivity increases. For example, when China entered world markets as a major exporter of industrial products, production from less productive firms in Mexico was reduced or ceased altogether, thus diverting resources to more productive uses. Weakening of inertia occurs when a shock frees a system from the grip of forces that have until now kept it in stasis. This model of a system that's stuck is sometimes called path dependence, as it involves a way of doing things that evolved along a particular path, under the influence of economic or technological factors. The classic example of path dependence is the establishment of the conventional QWERTY keyboard standard on typewriters in the late 19th century and computers thereafter. All people learn how to type on existing keyboards, so even a superior keyboard design can never gain a foothold. Another example is cities that persist in their original sites even though the economic reasons for founding them there no longer apply. Many towns and cities founded in France during the Roman Empire remain right where the Romans left them, even though the Romans made little use of navigable rivers and the coastal trade north of the Mediterranean that became important in later centuries. These cities have been held in place by the man-made and social structures that grew up around them, such as aqueducts and dioceses. In Britain, however, the nearly complete collapse of urban life after the departure of the Roman legions allowed that country to build new cities in places better suited to medieval trade. 
Coordination can play a role when a shock resets a playing field to such an extent that a system governed by opposing forces can settle at a new equilibrium point. Before the Great Boston Fire of 1872, the value of much real estate had been held down by the presence of crumbling buildings nearby. After the fire, many buildings were reconstructed simultaneously, encouraging investment on neighboring lots. Some economists argue that the fire created more wealth than it destroyed. A shock may free a system from path dependence—the grip of forces that have until now kept it in stasis The ongoing pandemic has set off a scramble among economists to access and analyze data. Although some people have considered this unseemly, even opportunistic, we social scientists can't run placebo-controlled experiments to see how a change in one thing affects another, and so we must exploit for this purpose any shock to a system that comes our way. What really matters is that the necessary data be gathered and preserved long enough for us to run it through our models, once those models are ready. We ourselves had to scramble to secure data regarding commuting behavior following the London metro strike; normally, such data gets destroyed after 8 weeks. In our case, thanks to Transport for London, we managed to get it anonymized and released for analysis. In recent years, there has been growing concern over the use of data and the potential for “data pollution," where an abundance of data storage and its subsequent use or misuse might work against the public interest. Examples include the use of Facebook's data around the 2016 U.S. presidential election, the way that online sellers use location data to discriminate on price, and how data from Strava's fitness app accidentally revealed the sites of U.S. military bases. Given such concerns, many countries have introduced more stringent data-protection legislation, such as the EU General Data Protection Regulation (GDPR). Since this legislation was introduced, a number of companies have faced heavy fines, including British Airways, which in 2018 was fined £183 million for poor security arrangements following a cyberattack. Most organizations delete data after a certain period. Nevertheless, Article 89 of the GDPR allows them to retain data “for scientific or historical research purposes or statistical purposes" in “the public interest." We argue that data-retention policies should take into account the higher value of data gathered during the current pandemic. The presence of detailed data is already paying off in the effort to contain the COVID-19 pandemic. Consider the Gauteng City-Region Observatory in Johannesburg, which in March 2020 began to provide governmental authorities at every level with baseline information on the 12-million-strong urban region. The observatory did so fast enough to allow for crucial learning while the crisis was still unfolding. The Great Boston Fire of 1872. Photo-Illustration: Chad Hagen; Original Photo: Universal Images Group/Getty Images The observatory's data had been gathered during its annual “quality of life" survey, now in its 10th year of operation, allowing it to quantify the risks involved in household crowding, shared sanitation facilities, and other circumstances. 
This information has been cross-indexed with broader health-vulnerability factors, like access to electronic communication, health care, and public transport, as well as with data on preexisting health conditions, such as the incidence of asthma, heart disease, and diabetes. This type of baseline management, or “baselining," approach could give these data systems more resilience when faced with the next crisis, whatever it may be—another pandemic, a different natural disaster, or an unexpected major infrastructural fault. For instance, the University of Melbourne conducted on-the-spot modeling of how the pandemic began to unfold during the 2020 lockdowns in Australia, which helped state decision-makers suppress the virus in real time. When we do find innovations through forced experimentation, how likely are those innovations to be adopted? People may well revert to old habits, and anyone who might reasonably expect to lose because of the change will certainly resist it. One might wonder whether many businesses that thrived while their employees worked off-site might nonetheless insist on people returning to the central office, where managers can be seen to manage, and thereby retain their jobs. We can also expect that those who own assets few people will want to use anymore will argue for government regulations to support those assets. Examples include public transport infrastructure—say, the subways of New York City—and retail and office space. One of the most famous examples of resistance to technological advancements is the Luddites, a group of skilled weavers and artisans in early 19th-century England who led a six-year rebellion smashing mechanized looms. They rightly feared a large drop in their wages and their own obsolescence. It took 12,000 troops to suppress the Luddites, but their example was followed by other “machine breaking" rebellions, riots, and strikes throughout much of England's industrial revolution. Resistance to change can also come from the highest levels. One explanation for the low levels of economic development in Russia and Austria-Hungary during the 19th century was the ruling class's resistance to new technology and to institutional reform. It was not that the leaders weren't aware of the economic benefits of such measures, but rather that they feared losing a grip on power and were content to retain a large share of a small pie. The conventional QWERTY keyboard. Photo-Illustration: Chad Hagen; Original Photo: Jonathan Weiss/Alamy Clearly, it's important to account for the effects that any innovation has on those who stand to lose from it. One way to do so is to commit to sharing any gains broadly, so that no one loses. Such a plan can disarm opposition before it arises. One example where this strategy has been successfully employed is the Montreal Protocol on Substances That Deplete the Ozone Layer. It included a number of measures to share the gains from rules that preserve the ozone layer, including payments to compensate those countries without readily available substitutes who would otherwise have suffered losses. The Montreal Protocol and its successor treaties have been highly effective in meeting their environmental objectives. COVID-19 winners and losers are already apparent. In 2020, economic analysis of social distancing in the United States showed that as many as 1.7 million lives might be saved by this practice. However, it was also found that about 90 percent of the life-years saved would have accrued to people older than 50. 
Furthermore, it is not unreasonable to expect that younger individuals should bear an equal (or perhaps greater) share of the costs of distancing and lockdowns. It seems wise to compensate younger people for complying with the rules on social distancing, both for reasons of fairness and to discourage civil disobedience. We know from stock prices and spending data that some sectors and firms have suffered disproportionately during the pandemic, especially those holding stranded assets that must be written off, such as shopping malls, many of which have lost much of their business, perhaps permanently. We can expect similar outcomes for human capital. There are ways to compensate these parties also, such as cash transfers linked to retraining or reinvestment. There will almost certainly be winners and losers as a result of the multitude of forced experiments occurring in workplaces. Some people can more easily adapt to new technologies, some are better suited to working from home or in new settings, and some businesses will benefit from less physical interaction and more online communication. Consider that the push toward online learning that the pandemic has provided may cost some schools their entire business: Why would students wish to listen to online lectures from their own professors when they could instead be listening to the superstars of their field? Such changes could deliver large productivity payoffs, but they will certainly have distributional consequences, likely benefitting the established universities, whose online platforms may now cater to a bigger market. We know from the history of the Black Death that if they're big enough, shocks have the power to bend or even break institutions. Thus, if we want them to survive, we need to ensure that our institutions are flexible. To manage the transition to a world with more resilient institutions, we need high-quality data, of all types and from various sources, including measures of individual human productivity, education, innovation, health, and well-being. There seems little doubt that pandemic-era data, even when it's of the most ordinary sort, will remain more valuable to society than that gathered in normal times. If we can learn the lessons of COVID-19, we will emerge from the challenge more resilient and better prepared for whatever may come next. Editor's note: The views expressed are the authors' own and should not be attributed to the International Monetary Fund, its executive board, or its management. This article appears in the August 2021 print issue as “What We Learned From the Pandemic."
  • How to Prevent a Power Outage From Becoming a Crisis
    Jul 20, 2021 07:00 AM PDT
    On 4 August 2020, a tropical storm knocked out power in many parts of New York City as well as neighboring counties and states. The electricity utility, Consolidated Edison, was able to fully restore service in Manhattan within a few hours. Meanwhile, in the surrounding boroughs of the Bronx, Brooklyn, Queens, and Staten Island, thousands of customers remained without electricity for days. There are technical reasons that contributed to faster repairs in Manhattan, but in general the neighborhoods that waited the longest to have their power restored tended to be poorer and less white. For most people, a power outage is an inconvenience. But for some, it is an emergency that can quickly turn deadly, especially when the outage occurs during a heat wave or a winter freeze. Extended exposure to temperatures above 32° C can quickly cause health crises, especially in the elderly, children, and people with heart disease, poor blood circulation, and other pre-existing conditions. The recent record-breaking heat in Oregon and Washington state, for example, claimed more than 200 lives. Extreme cold can have similarly dire consequences, as we saw during February’s massive power outage in Texas. Public health experts refer to those who are most at risk during power outages as “electricity vulnerable” or “electricity dependent.” In the United States, hundreds of thousands of people are in that category. A 2017 study estimated that about 685,000 Americans who live at home and have medical insurance are electricity dependent; of that group, roughly one fifth are vulnerable to even short power outages of 3 to 4 hours. Normally during a heat wave, people have the option of escaping their homes and seeking cooler temperatures in public spaces like libraries, coffee shops, and stores. COVID-19 changed all that. The pandemic created a work-at-home paradigm that shifted electricity usage away from commercial buildings to residential neighborhoods, in ways that few expected and fewer planned for. It made finding relief from the heat logistically difficult. And it slowed urgent repair and maintenance of the power grid, with work crews having to practice social distancing due to the pandemic. Step 1: Identify outages in real time There’s a better way to do things. It requires that providers like New York City’s ConEd revise their priorities for repairs during outages. Instead of first serving areas with the greatest density of customers, as they do now, utilities would make repairs in those areas with a greater share of customers whose health is immediately endangered by the outage. This strategy would correct an endemic imbalance that puts greater stressors on less affluent neighborhoods and the electricity vulnerable. The existence of this imbalance isn’t just theoretical, as the storm last August demonstrated. The NYU Power Outage Dashboard helps visualize power outages in New York City. In the historic data shown here, the red, orange, yellow indicate populations that are more vulnerable to outages, while blue and green indicate groups that are less vulnerable. YURY DVORKIN/NYU To help implement this strategy, my group at New York University has been developing a Power Outage Dashboard for New York City. The dashboard, created with funding from the National Science Foundation, collects data from ConEd about power outages in the city and integrates that data with open-source socio-demographic and environmental data to evaluate the severity of each outage for electricity-vulnerable groups. 
Based on this evaluation, we compute a rank for each of New York City’s 300-plus zip codes that takes into account demographic information like household income, age, race, and gender, as well as public health data and the presence of low-income and senior housing; the Zip Code Rank also factors in dynamically changing environmental data, such as ambient temperature, humidity, and precipitation. From the Zip Code Rank, we can determine an Overall Severity Rank for the outages in each zip code, which can be used to prioritize repairs. To aggregate this data, we designed a crawler that collects real-time outage data from Con Edison; we also have archives of historical data on hundreds of thousands of past outages. The addresses, zip codes, and demographic information come from NYC Open Data, a comprehensive set of public databases published by New York City agencies and their partners. A composite algorithm that we developed ranks the outages by the relative vulnerability of the customers in the zip code. This data is superimposed on a real-time outage map of New York City and color-coded by vulnerability—red for most vulnerable, blue for least. The dashboard is designed to allow users, including the public, to know which outages should have higher priority. Even a cursory look at the dashboard shows that outages in Manhattan tend to be green or blue, while those in the outer boroughs tend to be yellow, orange, or red. For example, on 8 July 2021, there were 41 relatively large outages in New York City. Of these, 6 were in more affluent areas of Manhattan, and our algorithm coded most of them as blue. In Brooklyn, by contrast, there were 17 outages coded orange or red. This chart shows a history of power outages by zip code in New York City. The Brooklyn zip code 11204, for example, had 607 outages. Poor neighborhoods are more likely to experience outages than wealthier areas are. YURY DVORKIN/NYU This wasn’t a one-off. When we look at the historical data, we can see that residents in the outer boroughs are more likely to lose power, with a clear correlation between the number and duration of power outages and the ethnic and class makeup of neighborhoods. A poor neighborhood with a larger minority population in the Bronx is much more likely to suffer an extended power outage than is a wealthier, whiter neighborhood in lower Manhattan. There are a number of ways to explain this disparity. The outer boroughs have more overhead power lines compared to Manhattan, where the cables run underground, and overhead power lines are more prone to faults. Likewise, the residential buildings in the Bronx, Brooklyn, and Queens tend to be older or less well maintained, compared to the office buildings and luxury condos of lower Manhattan. However you explain it, though, there’s still an underlying problem of social injustice and societal inequality that is leaving vulnerable people in jeopardy and that must be corrected. We hope to offer the dashboard as an open-source framework for use by utilities. In the future we will be designing functions to help route service vehicles to where they’re needed, based on the availability of repair teams. Step 2: Prioritize repairs for the most vulnerable customers Beyond just knowing where outages are and which groups of customers are being affected, a utility also needs to be able to forecast demand—predicting how much electricity it will need to supply to customers in the coming hours and days. 
This is of particular importance now, when many people are suffering from the lingering effects of COVID-19—so-called “long COVID” patients. Some of them are likely homebound and are now counted among the ranks of the electricity vulnerable. This map shows real-time outages in New York City on 16 July 2021. YURY DVORKIN/NYU Demand forecasting tools rely on historic trends about electricity use. But in New York City, analyses showed that demand forecasting errors surged in the aftermath of the pandemic’s stay-at-home orders. That’s because the COVID-19 pandemic was a sui generis phenomenon for which there was no historic data. As consumption patterns shifted from commercial buildings to residential, the forecasting tools were rendered ineffective. Any plan that could significantly alter demand forecasting must be considered with the power grid in mind. Last summer, for example, the mayor of New York City, Bill De Blasio, invested $55 million in a heatwave plan that included installing more than 74,000 air-conditioning units for low-income senior citizens. Although these units are providing necessary relief to a vulnerable population, they also are raising electricity demand in residential areas and causing additional stress on ConEd’s distribution system. Now that many offices and businesses are reopening, it may be difficult or even impossible for utilities to predict exactly how electricity demand will change this summer and when, where, and what the actual demand peak will be. Just because a utility experiences reduced demand in one part of its system does not mean it will be able to accommodate increased demand in another part of the system. There are basic network limits on the ability to transfer electricity from one part of the system to another, such as voltage and power flow. Grid operators must therefore proactively analyze the impacts of shifting demand and the reduced accuracy of demand forecasting tools on their systems. And they must factor their electricity-vulnerable customers into their planning. Electricity infrastructure is a complex engineering system, and its reliability cannot be 100-percent guaranteed, despite the best efforts of engineers, managers, and planners. Hence, it is important for a utility to consider every possible contingency and plan for mitigation and corrective actions. Such planning should be transparent and open for public comment and evaluation by experts from leading academic institutions, government labs, professional organizations, and so on. Of the 685,000 people in the United States who are considered “electricity dependent,” about one fifth are vulnerable to power outages of just 3 to 4 hours. Some readers may find it odd to link the power grid to social justice, but when you look at historic patterns, it’s hard to ignore that certain groups in our society have been marginalized and underserved. Going forward, we must do a better job of protecting vulnerable populations. Utilities can engage with the local community by surveying customers about their electricity needs. Companies will then be in a good position to assist their most vulnerable customers as soon as any power outage is reported. Thankfully, New York City made it through last summer with relatively few heat crises. However, the pandemic didn’t end once the weather turned cool. Circumstances could be much worse this summer. 
The city needs a fundamental change and the tools to effect it, with repairs prioritized in such a way that the most vulnerable, not the most affluent, are serviced first. And ConEd and electricity providers like it need to begin planning now. About the Author: Yury Dvorkin is an assistant professor of electrical and computer engineering at New York University’s Tandon School of Engineering and a faculty member of NYU’s Center for Urban Science and Progress.
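To make the prioritization idea in this item concrete, here is a minimal sketch in Python of how a zip-code vulnerability score and an overall severity rank might be combined to order repairs. It is only an illustration of the approach described above: the field names, weights, and thresholds are hypothetical and are not the NYU dashboard's actual inputs or algorithm.

# Hypothetical sketch of outage prioritization by customer vulnerability.
# Field names, weights, and thresholds are invented for illustration; they are
# not the NYU dashboard's actual inputs or algorithm.
from dataclasses import dataclass

@dataclass
class ZipCodeOutage:
    zip_code: str
    median_income: float         # USD per year, from open demographic data
    share_over_65: float         # fraction of residents aged 65 or older
    chronic_illness_rate: float  # fraction with asthma, heart disease, etc.
    customers_out: int           # customers currently without power
    heat_index_c: float          # current heat index in degrees Celsius

def zip_code_rank(z: ZipCodeOutage) -> float:
    """Combine static vulnerability indicators into a 0-to-1 score (higher means more vulnerable)."""
    income_score = max(0.0, 1.0 - z.median_income / 100_000)
    age_score = min(1.0, z.share_over_65 / 0.25)
    health_score = min(1.0, z.chronic_illness_rate / 0.20)
    return 0.4 * income_score + 0.3 * age_score + 0.3 * health_score

def overall_severity(z: ZipCodeOutage) -> float:
    """Scale the static rank by outage size and by current heat stress."""
    heat_factor = 1.0 + max(0.0, (z.heat_index_c - 32.0) / 10.0)
    return zip_code_rank(z) * z.customers_out * heat_factor

def repair_order(outages: list[ZipCodeOutage]) -> list[ZipCodeOutage]:
    """Send crews to the most severe outages first, not simply the densest ones."""
    return sorted(outages, key=overall_severity, reverse=True)

A production system would, as the article notes, also fold in live weather feeds, historical outage archives, public health data, and repair-crew availability.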
  • New AI-Based Augmented Innovation Tool Promises to Transform Engineer Problem Solving
    Jul 19, 2021 01:00 PM PDT
    You're an engineer trying to work out a solution to a complicated problem. You have been at this problem for the last three days. You've been leveraging your expertise in innovative methods and other disciplined processes, but you still haven't gotten to where you need to be. Imagine if you could forego the last thirty hours of work, and instead you could have reached a novel solution in just 30 minutes. In addition to having saved yourself nearly a week of time, you would have not only arrived at a solution to your vexing engineering issue, but you also would have prepared all the necessary documentation to apply for intellectual property (IP) protection for it. This is now what's available from IP.com with its latest suite of workflow solutions dubbed IQ Ideas PlusTM. IQ Ideas Plus makes it easy for inventors to submit, refine, and collaborate on ideas that are then delivered to the IP team for review. This new workflow solution is built on IP.com's AI natural language processing engine, Semantic GistTM, which the company has been refining since 1994. The IQ Ideas Plus portfolio was introduced earlier this year in the U.S. and has started rolling out worldwide. “The great thing about Semantic Gist is that it is set up to do a true semantic search," explained Dr. William Fowlkes, VP Analytics and Workflow Solutions at IP.com and developer of the IQ Ideas Plus solution. “It works off of your description. It does not require you to use arcane codes to define subject matters, to use keywords, or rely on complex Boolean constructs to find the key technology that you're looking for." The program is leveraging AI to analyze your words. So, the description of your problem is turned into a query. The AI engine then analyzes that query for its technical content and then using essentially cosine-similarity-type techniques and vector math, it will search eight or nine million patents, from any field, that are similar to your problem. “Even patents that look like they're in a different field sometimes have some pieces, some key technology nuggets, that are actually similar to your problem and it will find those," added Fowlkes. In a typical session, you might spend 10 – 15 minutes describing your problem on the IQ Ideas Plus template, which includes root cause analysis, when you need to fix a specific problem, or system improvement analysis, when you are asked to develop the next big thing for an existing product. The template lists those elements that you need to include so that you describe all the relevant factors and how they work together. The template involves a graphical user interface (GUI) that starts by asking you to name your new analysis and to describe the type of analysis you'll be conducting: “Solve a Problem", or “Improve a System". After you've chosen to 'Solve a Problem', for example, you are given a drop-down menu that asks you what field this problem resides in, i.e., mechanical engineering, electrical engineering, etc. The next drop-down menu then asks what sub-group this field belongs to, i.e., aerospace. After you've chosen your fields, you write a fairly simple description of your problem and ask for a solution (How do I fix…?). You then press the button, and three to five seconds later, you're provided two lists – “Functional Concepts" and “Inventive Principles". One can think of the Functional Concepts list as a thorough catalogue of all the prior art in this area. 
What really distinguishes the IQ Ideas Plus process is the “Inventive Principles” list, which consists of abstractions drawn from previous patents or patent applications. The semantic engine returns ordered lists with the most relevant results at the top. Of course, as you scroll down through the list, after the first 10 to 20, the results become less and less relevant. What will often happen is that as you work through both the “Functional Concepts” and “Inventive Principles” lists, you begin to realize that you've omitted elements from your description, or that your description should go in a slightly different direction based on the results. While this represents a slightly iterative process, each iteration is just as fast as the first. In fact, it's faster because you no longer need to spend 10 minutes writing down your changes. All along the process, there's a workbook, similar to an electronic lab notebook, for you to jot down your ideas. As you jot down your ideas based on the recommendations from the AI, it will offer you the ability to run a concept evaluation, telling you whether the concept is “marginally acceptable” or “good,” for example. You can use this concept evaluation tool to understand whether you have written your problem and solution in a way that is unique or novel, or whether you should consider going back to the drawing board to keep iterating on it. When you get a high-scoring idea, the next module, called “Inventor's Aide,” helps you write a very clear invention disclosure. In many organizations, drafting and submitting disclosures can be a pain point for engineers. Inventor's Aide makes the process fast and easy, providing suggestions to make the language clear and concise. With the IQ Ideas Plus suite of tools, all of the paperwork (i.e., a list of related or similar patents, companies active in the field, a full technology landscape review, etc.) is included as attachments to your invention disclosure, so that when it gets sent to the patent committee, they can look at the idea and know what art is already there and what technologies are in the space. They can then vet your idea, which has been delivered in a clear, concise manner with no jargon, so they understand the idea you have written. The cycle time between a patent review committee looking at your disclosure and you getting it back can sometimes stretch to weeks. IQ Ideas Plus shortens the cycle time, drives efficiencies, and reduces a lot of frustration on both ends of the equation. Moving more complete disclosures through the system also improves the grant rate of the applications, because the tool has helped document the necessary legwork during the process. “IQ Ideas does a great job of both helping you to find novel solutions using the brainstorming modules, and then analyzing those new ideas using the Inventor's Aide module,” Fowlkes said. Fowlkes argues that this really benefits both sides of the invention process: product development engineers and IP teams. For the engineers, filing invention disclosures is a very burdensome task. For the patent review committees or IP counsel, getting clear, concise disclosures, free of jargon and acronyms and complete with documentation of prior art attached, makes the review faster and more efficient. Professor Greg Gdowski, Executive Director of the Center for Medical Technology & Innovation at the University of Rochester, deployed IQ Ideas Plus with his students earlier this year. According to Gdowski, IQ Ideas Plus is very valuable.
“We train our students in carrying out technology landscapes on unmet clinical needs that are observed in our surgical operating rooms. Despite our best efforts, the students always miss technologies that are out there in the form of patents or patent applications. IQ Ideas Plus not only helped us brainstorm additional solutions, but it also revealed existing technologies that would have complicated the solution space had they not been identified.” Gdowski said another important advantage of using IQ Ideas Plus was that it helped the team understand the distribution of patents and companies working on technology related to a specific unmet clinical need (or problem). “IQ Ideas Plus gives engineers a new lens by which to evaluate solutions to problems and to execute intellectual property landscapes,” Gdowski added. IQ Ideas Plus enables faster idea generation and collaboration and more complete documents for submission and review, so the best ideas surface sooner and reach the market faster. Greg Gdowski is a candidate for IEEE Region 1 Director-Elect. Dr. William Fowlkes is an IEEE Senior Member.
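As a rough illustration of the retrieval step Fowlkes describes, turning a free-text problem statement into a vector and ranking patent documents by cosine similarity, here is a minimal TF-IDF sketch in Python. It is not IP.com's Semantic Gist engine, which uses its own natural-language models trained on the patent corpus; the three-document corpus and the query below are invented for the example.

# Minimal cosine-similarity retrieval sketch (not IP.com's Semantic Gist engine).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus standing in for millions of patent abstracts.
patents = [
    "Heat dissipation assembly for power electronics using phase-change material",
    "Method for reducing vibration in rotary compressors via active damping",
    "Thermal management of battery packs with liquid cooling channels",
]

query = "How do I keep a sealed electronics enclosure cool without a fan?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(patents)           # one vector per document
query_vector = vectorizer.transform([query])              # same vocabulary, one query vector

scores = cosine_similarity(query_vector, doc_vectors)[0]  # cosine of the angle between vectors
ranked = sorted(zip(scores, patents), reverse=True)       # most similar documents first
for score, title in ranked:
    print(f"{score:.3f}  {title}")

A semantic engine of the kind described in the article would replace the TF-IDF step with learned embeddings, so that documents sharing concepts but not keywords still score well, but the ranking principle is the same.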
  • NSF Engineering Alliance to Help Set Course for Research
    Jul 19, 2021 11:00 AM PDT
    THE INSTITUTE A new initiative is expected to give engineers of all disciplines a greater role to play in deciding which research and development projects the U.S. government funds. The National Science Foundation Directorate for Engineering recently launched the Engineering Research Visioning Alliance (ERVA), the first organization of its kind. ERVA received a five-year, US $8 million award from the NSF, which also funds much of the country's research. The alliance is expected to bring together representatives from across the engineering community—including academia, industry, and professional societies—to define and prioritize high-impact fundamental research addressing national, global, and societal needs. ERVA says it will recommend research topics to the NSF it believes the agency ought to fund. The alliance's founding partners include the Big Ten Academic Alliance, the nonprofit EPSCoR (Established Program to Stimulate Competitive Research)/IDeA (Institutional Development Award) Foundation, and the University Industry Demonstration Partnership (UIDP). IEEE is one of more than a dozen professional engineering societies that have joined as affiliate partners. The organizations plan to participate in the alliance's events and have the opportunity to contribute ideas. NSF hopes the initiative will spur advances. “When engineers come together behind a big challenge, we create amazing discoveries and innovations that can lead to exciting new fields," Dawn Tilbury, NSF assistant director for engineering and an IEEE Fellow, said in a news release announcing the alliance. A UNIFIED VOICE IEEE Fellow Barry W.Johnson is the alliance's founding executive director and its co-principal investigator. He says that what is unique about ERVA is that it is bringing in the entire engineering community—not just electrical engineers but also mechanical engineers, civil engineers, biomedical engineers, and others. Rather than having 20 or 30 professional societies speaking on behalf of a technology individually, he says, “we want ERVA to communicate with a consistent, unified voice." Johnson, an engineering professor at the University of Virginia, in Charlottesville, is familiar with how the NSF decides to fund programs. From March 2015 to January 2019 he was appointed to work at the foundation as director of its Division of Industrial Innovation and Partnerships. He also has served as an acting NSF assistant director responsible for the agency's Directorate for Engineering. What is most unusual about ERVA, Johnson says, is its strong emphasis on the participants' diversity, including their different geographic areas and disciplines, as well as being at different career stages. “We'll then supplement them with individuals from the science communities so that we get a true, multidisciplinary group," he says. “We believe that the future is going to reside in multidisciplinary activities." Johnson says the NSF traditionally has focused on ideas “bubbling up from the research community." An individual or organization would submit a proposal to the foundation, which would then vet it for funding. Another method the foundation has used to identify research topics is what Johnson calls visioning sessions. They might include workshops in which participants, including NSF program directors, identify new and emerging topics within a technical area before they become commonplace—for example, quantum computing. 
ERVA's process will begin with surveys of the research community to help identify potential research themes, Johnson says. The process is likely to include the use of research intelligence based on analyses of publication and patent databases. Once a potential theme is identified, a task force of eight to 12 experts will conduct the visioning process and issue a report to ERVA's leadership that includes recommendations, Johnson says. Once the report is finalized, he adds, it will be shared broadly with the engineering community including university professors and researchers at companies that might want to get involved. “What NSF really wants to accomplish is to be proactive in identifying new and emerging areas so that it achieves its vision to be the global leader in research and innovation," Johnson says. The alliance will engage with other federal agencies that fund basic research, such as the U.S. Departments of Defense and Energy, he says. THE STRUCTURE ERVA's principal investigator is Dorota A.Grejner-Brzezinska, vice president for knowledge enterprise at Ohio State University. In addition to Johnson, the ERVA co-principal investigators are IEEE Fellows Charles Johnson-Bey and Edl Shamiloglu, and UIDP President and CEO Anthony M.Boccanfuso. An advisory board has been established as well as a standing council, which Johnson calls the “intellectual brain trust of the organization." It is expected to help identify technologies the alliance should consider. The three groups met for the first time virtually on 11 June. A video of the meeting is available. Johnson is looking to hire a full-time executive director to oversee the organization and its full-time staff. CALL TO ACTION FOR IEEE MEMBERS Johnson wants IEEE members who are experts in specific technologies to help the alliance with the visioning activity by subscribing to be an ERVA Champion. There are already more than 400. He also calls on IEEE members to provide the alliance with ideas for research themes. “The key thing about the ERVA is getting ideas from the broad engineering community," he says, “with IEEE being a critical component." IEEE membership offers a wide range of benefits and opportunities for those who share a common interest in technology. If you are not already a member, consider joining IEEE and becoming part of a worldwide network of more than 400,000 students and professionals.
  • Tomorrow's Hydropower Begins With Retrofitting Today's Dams
    Jul 19, 2021 08:00 AM PDT
    With wind and solar prices dropping, it can be easy to forget that two-thirds of the globe’s renewable energy comes from hydropower. But hydro’s future is muddled with ghostly costs—and sometimes dubious environmental consequences. Most dams weren’t, in fact, actually built for hydropower. They help stop floods and supply farms and families with water, but they’re not generating electricity—especially in developing countries. Nearly half of Europe’s large dams are primarily used for hydropower, but fewer than a sixth of Asian dams and a tenth of African dams generate substantial amounts of electricity—according to Tor Haakon Bakken, a civil engineer at the Norwegian University of Science and Technology (NTNU). People like Bakken see such dams as opportunities. He’s one of a few researchers proposing to retrofit old, non-generating dams by installing turbines at their bases. That, he thinks, would create electricity without adding an ecological burden. Bakken’s group and one of his graduate students, Nora Rydland Fjøsne, modeled theoretically doing just that for numerous dams in a part of southern Spain. Their study found that, in many cases, retrofitting was an economically viable approach. Bakken hopes that hydro developers consider retrofitting before building anew. “I think, for the case in Spain, we have proven that this is both a technically and economically viable alternative,” he says. “And I think it’s the case in many other places too.” Even power-generating dams could also be productively retrofitted. The Brazilian Energy Research Office estimates that updating aging hydro plants could add 3 to 11 gigawatts of generating capacity—above and beyond Brazil’s existing 87 GW hydropower base. Meanwhile, scientists at the National Renewable Energy Laboratory (NREL) in the US have proposed using reservoirs as beds for floating solar panels, something they think could theoretically generate terawatts of power. But if you do need to build new hydropower facilities, other scientists believe the best course of action is to stay small: focus on establishing so-called “run-of-the-river” dams, which try to keep the river and its environmental conditions intact. “This kind of turbine, you don’t need to flood large areas,” says Michael Craig, an energy systems researcher at the University of Michigan, and formerly of NREL. Craig and some of his colleagues from NREL and the private-sector Natel Energy modeled a sequence of run-of-the-river dams on a river in California, all linked such that they could be easily controlled. They found that this approach was, like retrofitting dams, economically viable. “The smaller the facility, the more energy you can get out of that, I think is definitely a great strategy,” says Ilissa Ocko, a climate scientist at Environmental Defense Fund. Righting the course of old hydro Hydropower’s behemoth dams of old can singlehandedly rewrite the courses of rivers, creating winding reservoirs in their wake. They can prevent floods and supply water, but they can also displace countless communities upstream and constrict river flow downstream. Such dams disproportionately hurt rural and indigenous people who rely on rivers for a living. Moreover, some hydro plants generate alarming amounts of greenhouse gases. The culprit, scientists now know, are those very reservoirs—and the biological material trapped underneath. “You’re basically flooding a whole area that has all this vegetation on it that now is just decomposing underwater,” says Ocko. The result? 
Greenhouse gases. On top of carbon dioxide, it can generate methane, which—while not as long-lasting in the atmosphere—is more potent at warming. Reservoirs that are larger in surface area, and reservoirs that have warmer waters—such as those near the equator—are especially prone to burping up copious amounts of methane. Take the Brazilian Amazon, for instance. “Over the last decades, basically all the potential for hydropower expansion that Brazil had close to consuming markets has been exhausted, and Amazonia became the new frontier,” says Dailson José Bertassoli, Jr., a geochemist at the University of São Paulo. But many dam reservoirs in the Amazon are comparable to fossil fuel plants in their capacity to generate greenhouse gases. So too are their counterparts in Western Africa. In fact, one Environmental Defense Fund study found that nearly seven percent of the 1500 hydro plants around the world they examined emit more greenhouse gases per unit energy than fossil fuel plants. That’s ample reason to eschew building new dams and instead look to what we have. “In terms of environmental impacts, it makes sense to focus on areas that already are not in their natural conditions,” says Bertassoli. So, in the face of its mounting environmental costs, does hydro have a future? Yes, say climate scientists, but with a caveat. The key is to minimize the future of those large, greenhouse-gas-excreting reservoirs. “I would never say that we should stay away from hydropower. From a climate perspective, I think we need all the solutions we can get.” says Ocko. But, she adds, “We can’t make an assumption and put it into this bucket of being a renewable energy source just like solar and wind. It’s not.”
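The claim that some reservoirs out-emit fossil plants per unit of energy comes down to simple arithmetic: convert the reservoir's CO2 and methane fluxes into CO2-equivalent and divide by annual generation. The short Python sketch below walks through that conversion with invented reservoir numbers; only the methane warming potential (roughly 28 over 100 years, per IPCC AR5) and the rough gas-plant figure are standard ballpark values, and even those vary by source.

# Back-of-envelope comparison of reservoir greenhouse-gas intensity with a gas
# plant. The reservoir fluxes below are invented placeholders; only the methane
# warming potential and the rough gas-plant figure are standard ballpark values.
GWP100_CH4 = 28.0   # 100-year global warming potential of methane (IPCC AR5, approx.)

def intensity_g_co2eq_per_kwh(co2_tonnes_per_yr, ch4_tonnes_per_yr, generation_gwh_per_yr):
    """Convert annual CO2 and CH4 emissions into grams of CO2-equivalent per kWh generated."""
    co2_eq_grams = (co2_tonnes_per_yr + GWP100_CH4 * ch4_tonnes_per_yr) * 1e6
    kwh = generation_gwh_per_yr * 1e6
    return co2_eq_grams / kwh

# Hypothetical warm, shallow tropical reservoir with heavy methane emissions:
hydro = intensity_g_co2eq_per_kwh(co2_tonnes_per_yr=60_000,
                                  ch4_tonnes_per_yr=9_000,
                                  generation_gwh_per_yr=600)
GAS_CCGT_G_PER_KWH = 450.0   # rough figure for a combined-cycle gas plant
print(f"reservoir: {hydro:.0f} g CO2-eq/kWh  vs  gas plant: {GAS_CCGT_G_PER_KWH:.0f}")
# With these illustrative numbers the reservoir comes out around 520 g CO2-eq/kWh,
# above the gas plant, the situation the EDF study found for a small fraction of
# the hydro plants it examined.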
  • Google's Quantum Computer Exponentially Suppresses Errors
    Jul 19, 2021 07:01 AM PDT
    In order to develop a practical quantum computer, scientists will have to design ways to deal with any errors that will inevitably pop up in its performance. Now Google has demonstrated that exponential suppression of such errors is possible, experiments that may help pave the way for scalable, fault-tolerant quantum computers. A quantum computer with enough components known as quantum bits or "qubits" could in theory achieve a "quantum advantage" allowing it to find the answers to problems no classical computer could ever solve. However, a critical drawback of current quantum computers is the way in which their inner workings are prone to errors. Current state-of-the-art quantum platforms typically have error rates near 10^-3 (or one in a thousand), but many practical applications call for error rates as low as 10^-15. In addition to building qubits that are physically less prone to mistakes, scientists hope to compensate for high error rates using stabilizer codes. This strategy distributes quantum information across many qubits in such a way that errors can be detected and corrected. A cluster of these "data qubits" can then all count as one single useful "logical qubit." In the new study, Google scientists first experimented with a type of stabilizer code known as a repetition code, in which the qubits of Sycamore — Google's 54-qubit quantum computer — alternated between serving as data qubits and "measure qubits" tasked with detecting errors in their fellow qubits. They arranged these qubits in a one-dimensional chain, such that each qubit had two neighbors at most. The researchers found that increasing the number of qubits their repetition code is built on led to an exponential suppression of the error rate, reducing the amount of errors per round of corrections up to more than a hundredfold when they scaled the number of qubits from 5 to 21. Such error suppression proved stable over 50 rounds of error correction. "This work appears to experimentally validate the assumption that error-correction schemes can scale up as advertised," says study senior author Julian Kelly, a research scientist at Google. However, this repetition code "was limited to looking at quantum errors along just one direction, but errors can occur in multiple directions," says study co-author Kevin Satzinger, a research scientist at Google. As such, they also experimented with a kind of stabilizer code known as a surface code, in which they arranged Sycamore's qubits in a two-dimensional checkerboard pattern of data and measure qubits to detect errors. They found the simplest version of such a surface code — using a two-by-two grid of data qubits and three measure qubits — successfully performed as expected from computer simulations. These findings suggest that if the scientists can reduce the inherent error rate of qubits by a factor of roughly 10 and increase the size of each logical qubit up to about 1,000 data qubits, "we think we can reach algorithmically relevant logical error rates" Kelly says. In the future, the scientists aim to scale up their surface codes to grids of three-by-three or five-by-five data qubits to experimentally test whether or not exponential suppression of error rates also occurs in these systems, Satzinger says. The scientists detailed their findings online July 14 in the journal Nature.
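The exponential suppression described above has a simple classical analogue: a repetition code decoded by majority vote, where adding redundancy drives the logical error rate down exponentially as long as the physical error rate stays below threshold. The Python sketch below is a Monte Carlo illustration of that scaling behavior only; it does not model Sycamore's qubits, its stabilizer measurement cycles, or the distinct bit-flip and phase-flip error channels the Google team had to handle.

# Classical repetition-code analogue of exponential error suppression.
# Illustration only: real quantum error correction must handle both bit-flip and
# phase-flip errors and uses repeated stabilizer measurements on measure qubits.
import random

def logical_error_rate(n_bits, p_phys, trials=200_000):
    """Encode one bit as n_bits copies, flip each copy independently with
    probability p_phys, decode by majority vote, and count decoding failures."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_phys for _ in range(n_bits))
        if flips > n_bits // 2:        # majority of copies corrupted
            failures += 1
    return failures / trials

p_phys = 0.1                            # physical error rate, well below the 50% threshold
for n in (1, 3, 5, 9, 13):
    rate = logical_error_rate(n, p_phys)
    print(f"{n:2d} copies: logical error rate ~ {rate:.1e}")
# Expected trend: roughly 1e-1, 3e-2, 9e-3, 9e-4, 9e-5 as n grows. Each added
# pair of copies cuts the logical rate by roughly another constant factor, the
# same qualitative scaling the Google team reports for its repetition code.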

Engineering on Twitter