Singularity Hub Daily
9 minutes | Dec 6, 2020
New Quantum Computer in China Claims Quantum Advantage With Light
Quantum computers are not amazing because they’re revolutionizing everything in the world of computation. They’re still clunky, finicky, insanely expensive, and not too practical. They may very well have a big impact one day—but that day is not today. No, the amazing thing about quantum computers is that they exist at all. For a few decades after these exotic machines were first proposed, no one knew if you could actually even build one. Today, we can say that, yes, quantum computers are possible and that there are a number in working order around the world. Developed by the likes of Google, IBM, and IonQ, quantum computers are decidedly here.

Now, the question has become: Will they work as advertised? Can they do calculations much faster than a classical computer ever could? The first experimental proof that the answer to that question is likely “yes” came last year when Google announced its Sycamore quantum computer had achieved quantum advantage—formerly known as quantum supremacy—by completing a specially devised statistical calculation that Google estimated would have taken a classical supercomputer some 10,000 years to run.

Now, according to a paper in Science written by a team of physicists led by Jian-Wei Pan and Chao-Yang Lu, it appears Google’s feat may have been replicated. And what’s more, it was replicated on a completely different kind of quantum machine. The researchers used a photonic quantum computer—in which calculations are performed by photons of light—to complete a statistical calculation that they say would have taken China’s Sunway TaihuLight supercomputer (the fourth fastest in the world) 2.5 billion years to run. That’s over half the age of the Earth. Their computer handled the task in a little over three minutes.

Notably, however, experimentally proving quantum advantage isn’t straightforward. Because it depends on the changing abilities of classical computers and the ingenuity of those coding them, it’s still a moving benchmark.
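The claimed gap is worth pausing on. A quick back-of-envelope check, taking “a little over three minutes” as roughly 200 seconds (our approximation, not a figure from the paper), puts the speedup at about fourteen orders of magnitude:

```python
# Rough sanity check of the claimed quantum-vs-classical runtime gap.
SECONDS_PER_YEAR = 365.25 * 24 * 3600         # ~3.16e7 seconds

classical_seconds = 2.5e9 * SECONDS_PER_YEAR  # 2.5 billion years on TaihuLight
quantum_seconds = 200                         # "a little over three minutes"

speedup = classical_seconds / quantum_seconds
print(f"{speedup:.1e}")  # ~3.9e+14, i.e. roughly 14 orders of magnitude
```

For comparison, Google’s Sycamore claim was “only” about ten to eleven orders of magnitude, which is part of why the photonic result drew attention.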
Google’s claim was disputed last year, and this one may encounter challenges too. That’s science for you. A bit more below. First, how does the computer work?

Child of Deep Thought

In Douglas Adams’ Hitchhiker’s Guide to the Galaxy, an alien species builds the most powerful computer ever to find the answer to life, the universe, and everything. Deep Thought, as the machine is called, takes seven and a half million years to present its confounding answer: 42. The problem, Deep Thought says, is we don’t know the question. So, it designs its successor to find the question to life, the universe, and everything—and that computer is the planet Earth. Earth, and every natural process on it, then goes on to run its calculation for millions of years.

The new quantum computer, called Jiǔzhāng, is a bit like the child of Deep Thought. It answers questions about the physical universe by using the natural processes of the universe itself to run those calculations, and then measures the output. In this case, the question is fairly simple and specific.

Jiǔzhāng doesn’t look like you’d imagine. It’s not a sleek chip—like all classical computers and many quantum ones too—so much as a quantum Rube Goldberg machine on a lab bench. A simple but effective analogy is to imagine the carnival game where balls are dropped onto a vertical board covered in wooden pegs. The balls bounce off the pegs and come to rest in slots at the bottom. Now swap balls for laser-emitted photons, pegs for beam splitters and mirrors, and slots for photodetectors.

In either scenario, you can try to calculate where the balls or photons end up. That is, what’s the distribution in the slots after they’ve all bounced down the board? This is the question Jiǔzhāng was built to answer. The calculation is called boson sampling or, in this case, a variant called Gaussian boson sampling. Boson sampling was originally conceived by Scott Aaronson and Alex Arkhipov in 2011 as an experimental way to prove qua...
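The carnival-board half of that analogy is easy to simulate classically, which is precisely the point: a ball’s final slot is just a count of independent left/right bounces, so the slot distribution is binomial and trivial to sample. Here is a minimal sketch (board size and ball count are arbitrary illustrative choices):

```python
import random
from collections import Counter

def drop_ball(num_pegs: int) -> int:
    """One ball bounces left or right at each peg; its final slot
    is simply the number of rightward bounces."""
    return sum(random.random() < 0.5 for _ in range(num_pegs))

def slot_distribution(num_balls: int, num_pegs: int) -> Counter:
    """Where do the balls come to rest after bouncing down the board?"""
    return Counter(drop_ball(num_pegs) for _ in range(num_balls))

counts = slot_distribution(num_balls=10_000, num_pegs=8)
for slot in range(9):  # 8 pegs -> 9 possible slots
    print(f"slot {slot}: {'#' * (counts[slot] // 100)}")
```

Swap the balls for photons and the coin flips become quantum amplitudes that interfere, and computing the resulting distribution (which involves matrix permanents in the boson sampling formulation) is what’s believed to overwhelm classical machines.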
4 minutes | Dec 4, 2020
Not All Sunshine and Rainbows: Waymo's Self-Driving Cars Take on Inclement Weather
How do you train self-driving software to safely pilot a vehicle through city streets? There are a few options; the first that probably springs to mind for most people is to spend countless hours actually driving around on city streets (with a safety driver behind the wheel). There’s also the virtual route—it’s far easier to run thousands of hours of simulations on computers than it is to drive around for thousands of hours (and cheaper too!). But Waymo, the Alphabet-owned driverless car company, has a third option for teaching its cars to drive sans-human. The company is building a test site—essentially a fake city—in rural Ohio that will be devoted to fine-tuning its cars’ urban driving skills.

Waymo tweeted: “1/2 Waymo has secured two new facilities to advance the #WaymoDriver. First, we’re working with TRC Inc. to co-develop an exclusive, first-of-its-kind testing environment that will model a dense urban environment. Second, we’re opening an R&D facility in Menlo Park,”

This actually isn’t the first such site; Waymo already has one in Merced, California, as well as an area dedicated to testing heavy-duty trucks in Texas. But the Ohio location of the new site was chosen very intentionally. Until now, self-driving cars have mostly breezed around the sunny streets of places like Arizona and Northern California; not a lot of snow, ice, hail, or even rain there. But Waymo is, of course, hoping people all over the country and in varying climates will eventually want to ride in or own self-driving cars; that means the vehicles need to get better at contending with tough weather conditions.

How might heavy snow affect a car’s lidar’s ability to perceive what’s 10 feet in front of it? What happens if a crust of ice freezes over one of the car’s camera lenses? There’s only one way to find out. Even if you take weather out of the equation, driving around a city is much more complicated than highway or rural driving.
There are pedestrians, cyclists, delivery trucks, Ubers, and countless other obstacles that pop up in the blink of an eye, and drivers have to make split-second decisions that can add up to the difference between life and death—or just a dented car versus a smooth one.

Waymo’s testing site is being built at the Transportation Research Center, a vehicle proving ground and testing center in East Liberty, Ohio, 40 miles northwest of Columbus. TRC is set up to test all sorts of vehicles, from trucks to buses to motorcycles, for everything from safety, fuel economy, and emissions to noise, crash simulation, and performance. The center’s website emphasizes that it “offers a world of driving conditions, with wet spring weather, hot summers and icy cold, snowy winters.” So it seems Waymo will have its work cut out for it. The site will include terrain like hills as well as high-speed and dense urban intersections.

Although self-driving car technology isn’t advancing as quickly as some experts had predicted it would (and was set back a bit by the pandemic, as everything was), it continues to make incremental progress, as do the regulations around allowing driverless cars to operate. Since mid-2019, the California Public Utilities Commission has granted autonomous vehicle pilot permits to seven different companies, Waymo among them. These permits allowed companies to put driverless cars on the road for testing purposes only. But late last month, the commission announced it would allow driverless car companies to offer passenger service and shared rides, and to accept compensation for those rides.

If all goes according to plan at the new testing site in Ohio, people may soon feel as comfortable hailing an autonomous taxi in the middle of a snowstorm as on a clear, sunny day. Image Credit: Waymo
5 minutes | Dec 3, 2020
After 1.5 Billion Years in Flux, Here’s How a New, Stronger Crust Set the Stage for Life on Earth
Our planet is unique in the solar system. It’s the only one with active plate tectonics, ocean basins, continents and, as far as we know, life. But Earth in its current form is 4.5 billion years in the making; it’s starkly different to what it was in a much earlier era. Details about how, when, and why the planet’s early history unfolded as it did have largely eluded scientists, mainly because of the scarcity of preserved rocks from this geological period. Our research, published today in Nature, reveals Earth’s earliest continents were entities in flux. They disappeared and reappeared over 1.5 billion years before finally gaining form.

Early Earth: A Strange New World

The first 1.5 billion years of Earth’s history were a tumultuous period that set the stage for the rest of the planet’s journey. Several key events took place, including the formation of the first continents, the emergence of land, and the development of the early atmosphere and oceans. All of these events were the result of the changing dynamics of Earth’s interior. They were also catalysts to the first appearances of primitive life. The preserved record of Earth’s first 500 million years is limited to just a few tiny crystals of the mineral zircon. Over the next billion or so years, kilometer-scale (and larger) fragments of rock were generated and preserved. These would go on to forge the cores of major continents. Scientists know about the properties of rocks and the chemical reactions that must occur for their constituent minerals to be made. Based on this, we know early Earth boasted very high temperatures, hundreds of degrees hotter than today’s.

An Epic Metamorphosis

Earth’s crust today is made of thick, buoyant continental crust that stands proud above the sea. Meanwhile, below the oceans are thin but dense oceanic crusts.
The planet is also broken into a series of plates that move around in a process called “continental drift.” In some places these plates drift apart, and in others they converge to form mighty mountains. This dynamic movement of Earth’s tectonic plates is the mechanism by which heat from its interior is released into space. This results in volcanic activity focused mainly at the plate boundaries. A good example is the Ring of Fire, a path along the Pacific Ocean where volcanic eruptions and earthquakes are frequent.

To unravel the processes that operated on early Earth, we developed computer models to replicate its once much hotter conditions. These conditions were driven by large amounts of internal “primordial heat.” This is the heat left over from when Earth first formed. Our modeling shows the release of primordial heat during Earth’s early stages (when the interior was three to four times hotter than today) caused extensive melting in the upper mantle. This is the mostly solid region below the crust, between 10 km and 100 km deep. This internal melting created magma which, through a plumbing system, was thrust out as lava onto the crust. The shallow mantle left behind, dry and rigid, became welded to the crust and formed the first continents.

The Pulse of First Life

Our research revealed a lag between the formation of Earth’s first crust and the development of the mantle keels at the base of the first continents. The first formed crust, which was present between 4.5 billion and 4 billion years ago, was weak and prone to destruction. It progressively became stronger over the next billion years to form the core of modern continents. This process was crucial to continents becoming stable. When magma was purged from Earth’s interior, rigid rafts formed in the mantle beneath the new crust, shielding it from further destruction.
Moreover, the rise of these rigid continents ultimately led to weathering and erosion, which is when rocks and minerals break down or dissolve over long periods to eventually be carried away and deposited as sediment. Early erosion would have changed the composition of Earth’s atmosphe...
8 minutes | Dec 2, 2020
Breakthrough NASA Study Discovers Surprising Key to Astronauts' Health in Space
Thanks to SpaceX, traveling beyond Earth now seems pretty tangible for us commoners. True, a ticket to the International Space Station currently runs $55 million (ouch). Technologically, however, the triumphant splashdown of SpaceX’s astronaut-carrying Dragon capsule earlier this year shows they have the chops to make commercial space travel work. A casual jaunt over to Mars no longer seems like a fantastical vision statement.

Yet getting people safely to space is just one step. Getting them there and back again healthy is another. Since the launch of Laika, the part-husky space pioneer who orbited Earth in 1957, scientists have known that space travel takes a toll on the body. The heart and blood vessels stiffen. The risk of cancer increases. The immune system goes haywire. The brain slowly undergoes degeneration in a way similar to aging. Some of these detrimental health effects are reversible once space travelers are back in the grasp of gravity; others linger. We’re still unsure of what health problems stick around long term. But one thing is clear: to keep pushing the next frontiers of space travel, we need effective countermeasures. What if there’s one already in our drug stores?

This month, a collaboration between NASA and various research institutions pinpointed a “central biological hub” that controls health during space travel. The culprit is the cell’s energy factory, the mitochondria, whose function breaks down in a way eerily similar to aging. Like shutting down power and water in a city, disruptions to the mitochondria reverberate throughout the cells and organs, potentially leading to problems with sleeping, the immune system, and more in space. The results were published in Cell.

“We started by asking whether there is some kind of universal mechanism happening in the body in space...” said lead author Dr. Afshin Beheshti at the Space Biosciences Division of NASA at Ames Research Center in Silicon Valley, California.
“What we found over and over was that something is happening with the mitochondria regulation that throws everything out of whack.” It’s terrific news. Because the mitochondria is central to health, we already have various FDA-approved drugs and nutritional supplements to support its function. Like combating aging, however, fighting one biological foe may not be enough to reverse all health issues in space travel. But it’s sure a great place to start.

Space Health Goes Big Data

The study is part of a package of over two dozen papers published this week using NASA’s GeneLab platform. GeneLab combines two of modern science’s best trends—big data and open-sourcing—into a massive library of data ranging from animal studies to humans. By using big data, the authors explained, it’s possible to fish for key changes in the human body at the molecular level in response to spaceflight. “Such knowledge could be used to design efficient countermeasures that would benefit astronauts and the health of people on Earth,” they write.

To start, they tapped into health data from 59 astronauts and hundreds of other tissue samples that were flown into space. It’s a mind-blowingly comprehensive dataset, containing everything from genes to proteins to metabolism. For example, the team could analyze how genes turned on or off during spaceflight—dubbed epigenetics—which provides insight into which of the cell’s processes change in space. It’s similar to analyzing a city’s traffic patterns with no preconception of how traffic should move, but with a wealth of information on each of the cars: how they move, the routes they take, and how fast they’re driving. In this way, the team was able to do an unbiased fishing expedition into a cell’s biological traffic.

It paid off. Regardless of whether they looked at human or mouse cells exposed to space, regardless of whether it was kidney, eye, or other tissue, one factor kept popping up: the mitochondria. ...
6 minutes | Dec 1, 2020
As Algorithms Take Over More of the Economy, We Should Cede Control (Very) Carefully
Algorithms play an increasingly prominent part in our lives, governing everything from the news we see to the products we buy. As they proliferate, experts say, we need to make sure they don’t collude against us in damaging ways. Fears of malevolent artificial intelligence plotting humanity’s downfall are a staple of science fiction. But there are plenty of nearer-term situations in which relatively dumb algorithms could do serious harm unintentionally, particularly when they are interlocked in complex networks of relationships. In the economic sphere a high proportion of decision-making is already being offloaded to machines, and there have been warning signs of where that could lead if we’re not careful. The 2010 “Flash Crash,” where algorithmic traders helped wipe nearly $1 trillion off the stock market in minutes, is a textbook example, and widespread use of automated trading software has been blamed for the increasing fragility of markets. But another important place where algorithms could undermine our economic system is in price-setting. Competitive markets are essential for the smooth functioning of the capitalist system that underpins Western society, which is why countries like the US have strict anti-trust laws that prevent companies from creating monopolies or colluding to build cartels that artificially inflate prices. These regulations were built for an era when pricing decisions could always be traced back to a human, though. As self-adapting pricing algorithms increasingly decide the value of products and commodities, those laws are starting to look unfit for purpose, say the authors of a paper in Science. Using algorithms to quickly adjust prices in a dynamic market is not a new idea—airlines have been using them for decades—but previously these algorithms operated based on rules that were hard-coded into them by programmers. Today the pricing algorithms that underpin many marketplaces, especially online ones, rely on machine learning instead. 
After being set an overarching goal like maximizing profit, they develop their own strategies based on experience of the market, often with little human oversight. The most advanced also use forms of AI whose workings are opaque even to the humans who built them. In addition, the public nature of online markets means that competitors’ prices are available in real time. It’s well-documented that major retailers like Amazon and Walmart are engaged in a never-ending bot war, using automated software to constantly snoop on their rivals’ pricing and inventory.

This combination of factors sets the stage perfectly for AI-powered pricing algorithms to adopt collusive pricing strategies, say the authors. If given free rein to develop their own strategies, multiple pricing algorithms with real-time access to each other’s prices could quickly learn that cooperating with each other is the best way to maximize profits. The authors note that researchers have already found evidence that pricing algorithms will spontaneously develop collusive strategies in computer-simulated markets, and a recent study found evidence that suggests pricing algorithms may be colluding in Germany’s retail gasoline market.

And that’s a problem, because today’s anti-trust laws are ill-suited to prosecuting this kind of behavior. Collusion among humans typically involves companies communicating with each other to agree on a strategy that pushes prices above the true market value. They then develop rules to determine how they maintain this markup in a dynamic market, rules that can also incorporate the threat of retaliatory pricing to spark a price war if another cartel member tries to undercut the agreed pricing strategy. Because of the complexity of working out whether specific pricing strategies or prices are the result of collusion, prosecutions have instead relied on communication between companies to establish guilt. That’s a problem because algorithms don’t need to communicate to coll...
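The simulated-market experiments the authors cite can be sketched in miniature. Below is a toy duopoly with two price levels and a prisoner’s-dilemma-like profit structure, where each seller runs independent Q-learning conditioned on the rival’s last price. Every number and parameter here is an illustrative invention, not the cited studies’ setup, and whether tacitly collusive (high) pricing actually emerges is sensitive to these choices:

```python
import random

PRICES = [1.5, 2.0]  # action 0 = undercut price, action 1 = "collusive" price

def profits(a1: int, a2: int) -> tuple:
    """Prisoner's-dilemma-like payoffs: matching prices split the market;
    otherwise the cheaper seller captures all demand."""
    if a1 == a2:
        return PRICES[a1], PRICES[a2]
    low = min(a1, a2)            # index of the lower price
    gain = 2 * PRICES[low]       # undercutter serves the whole market
    return (gain, 0.0) if a1 == low else (0.0, gain)

def train(rounds=50_000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Two independent epsilon-greedy Q-learners; each agent's state
    is the rival's last action."""
    random.seed(seed)
    Q = [[[0.0, 0.0] for _ in range(2)] for _ in range(2)]
    last = [1, 1]                # start from the high-price state
    for _ in range(rounds):
        acts = []
        for agent in range(2):
            state = last[1 - agent]
            if random.random() < eps:
                acts.append(random.randrange(2))
            else:
                acts.append(max((0, 1), key=lambda a: Q[agent][state][a]))
        rewards = profits(acts[0], acts[1])
        for agent in range(2):
            state, next_state = last[1 - agent], acts[1 - agent]
            td_target = rewards[agent] + gamma * max(Q[agent][next_state])
            Q[agent][state][acts[agent]] += alpha * (
                td_target - Q[agent][state][acts[agent]])
        last = acts
    return Q

Q = train()
for agent in range(2):
    greedy = [max((0, 1), key=lambda a: Q[agent][s][a]) for s in range(2)]
    print(f"agent {agent}: greedy prices vs (rival-low, rival-high) = "
          f"{[PRICES[a] for a in greedy]}")
```

The point of conditioning on the rival’s last price is that it makes retaliation learnable: an agent can discover a strategy of pricing high until undercut, which is exactly the reward-punishment structure human cartels negotiate explicitly.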
7 minutes | Nov 30, 2020
Is the Pandemic Spurring a Robot Revolution?
“Are robots really destined to take over restaurant kitchens?” This was the headline of an article published by Eater four years ago. One of the experts interviewed was Siddhartha Srinivasa, at the time a professor at the Robotics Institute at Carnegie Mellon University and currently director of Robotics and AI for Amazon. He said, “I’d love to make robots unsexy. It’s weird to say this, but when something becomes unsexy, it means that it works so well that you don’t have to think about it. You don’t stare at your dishwasher as it washes your dishes in fascination, because you know it’s gonna work every time... I want to get robots to that stage of reliability.”

Have we managed to get there over the last four years? Are robots unsexy yet? And how has the pandemic changed the trajectory of automation across industries?

The Covid Effect

The pandemic has had a massive economic impact all over the world, and one of the problems faced by many companies has been keeping their businesses running without putting employees at risk of infection. Many organizations are seeking to remain operational in the short term by automating tasks that would otherwise be carried out by humans. According to Digital Trends, since the start of the pandemic we have seen a significant increase in automation efforts in manufacturing, meat packing, grocery stores, and more. In a June survey, 44 percent of corporate financial officers said they were considering more automation in response to coronavirus. MIT economist David Autor described the economic crisis and the Covid-19 pandemic as “an event that forces automation.” But he added that Covid-19 created a kind of disruption that has forced automation in sectors and activities with a shortage of workers but no reduction in demand. This hasn’t taken place in hospitality, where demand has practically disappeared, but it has in agriculture and distribution.
The latter is being altered by the rapid growth of e-commerce, with more efficient and automated warehouses that can provide better service.

China Leads the Way

China is currently in a unique position to lead the world’s automation economy. Although the country boasts a huge workforce, labor costs have increased tenfold over the past 20 years. As the world’s factory, China has a strong incentive to automate its manufacturing sector, which enjoys a solid lead in high-quality products. China is currently the largest and fastest-growing market in the world for industrial robotics, with sales rising 21 percent to $5.4 billion in 2019. This represents one third of global sales. As a result, Chinese companies are developing a significant advantage in terms of learning to work with metallic colleagues. The reasons behind this Asian dominance are evident: the population has a greater capacity and need for tech adoption. A large percentage of the population will soon be of retirement age, without an equivalent younger demographic to replace it, leading to a pressing need to adopt automation in the short term.

China is well ahead of other countries in restaurant automation. As reported in Bloomberg, in early 2020 UBS Group AG conducted a survey of over 13,000 consumers in different countries and found that 64 percent of Chinese participants had ordered meals through their phones at least once a week, compared to a mere 17 percent in the US. As digital ordering gains ground, robot waiters and chefs are likely not far behind. The West harbors a mistrust towards non-humans that the East does not.

The Robot Evolution

The pandemic was a perfect excuse for robots to replace us. But despite the hype around this idea, robots have mostly disappointed during the pandemic.
Just over 66 different kinds of “social” robots have been piloted in hospitals, health centers, airports, office buildings, and other public and private spaces in response to the pandemic, according to a study from researchers at Po...
5 minutes | Nov 27, 2020
Solar Power Stations in Space Could Be the Answer to Our Energy Needs
It sounds like science fiction: giant solar power stations floating in space that beam down enormous amounts of energy to Earth. And for a long time, the concept—first developed by the Russian scientist Konstantin Tsiolkovsky in the 1920s—was mainly an inspiration for writers. A century later, however, scientists are making huge strides in turning the concept into reality. The European Space Agency has realized the potential of these efforts and is now looking to fund such projects, predicting that the first industrial resource we will get from space is “beamed power.”

Climate change is the greatest challenge of our time, so there’s a lot at stake. From rising global temperatures to shifting weather patterns, the impacts of climate change are already being felt around the globe. Overcoming this challenge will require radical changes to how we generate and consume energy. Renewable energy technologies have developed drastically in recent years, with improved efficiency and lower cost. But one major barrier to their uptake is the fact that they don’t provide a constant supply of energy. Wind and solar farms only produce energy when the wind is blowing or the sun is shining—but we need electricity around the clock, every day. Ultimately, we need a way to store energy on a large scale before we can make the switch to renewable sources.

Benefits of Space

A possible way around this would be to generate solar energy in space. There are many advantages to this. A space-based solar power station could orbit to face the sun 24 hours a day. Earth’s atmosphere also absorbs and reflects some of the sun’s light, so solar cells above the atmosphere will receive more sunlight and produce more energy. But one of the key challenges to overcome is how to assemble, launch, and deploy such large structures. A single solar power station may have to cover as much as 10 square kilometers—equivalent to 1,400 football fields.
Using lightweight materials will also be critical, as the biggest expense will be the cost of launching the station into space on a rocket. One proposed solution is to develop a swarm of thousands of smaller satellites that will come together and configure to form a single, large solar generator. In 2017, researchers at the California Institute of Technology outlined designs for a modular power station, consisting of thousands of ultralight solar cell tiles. They also demonstrated a prototype tile weighing just 280 grams per square meter, similar to the weight of card. Recently, developments in manufacturing, such as 3D printing, are also being looked at for this application. At the University of Liverpool, we are exploring new manufacturing techniques for printing ultralight solar cells on to solar sails. A solar sail is a foldable, lightweight, and highly reflective membrane capable of harnessing the effect of the sun’s radiation pressure to propel a spacecraft forward without fuel. We are exploring how to embed solar cells on solar sail structures to create large, fuel-free solar power stations. These methods would enable us to construct the power stations in space. Indeed, it could one day be possible to manufacture and deploy units in space from the International Space Station or the future lunar gateway station that will orbit the moon. Such devices could in fact help provide power on the moon. The possibilities don’t end there. While we are currently reliant on materials from Earth to build power stations, scientists are also considering using resources from space for manufacturing, such as materials found on the moon. Another major challenge will be getting the power transmitted back to Earth. The plan is to convert electricity from the solar cells into energy waves and use electromagnetic fields to transfer them down to an antenna on the Earth’s surface. The antenna would then convert the waves back into electricity. 
Researchers led by the Japan Aerospace Exploration Agency have already deve...
5 minutes | Nov 25, 2020
This Company Wants to Put a Human-Size Hologram Booth in Your Living Room
Over the last several months we’ve gotten very used to communicating via video chat. Zoom, FaceTime, Google Hangouts, and the like have not only replaced most in-person business meetings, they’ve acted as a stand-in for gatherings between friends and reunions between relatives. Just a few short years ago, many of us would have found it strange to think we’d be spending so much time talking to people “face-to-face” while sitting right in our own homes. Now there’s a new technology looming on the horizon that may one day replace video calls with an even stranger-to-contemplate, more futuristic tool: real-time, full-body holograms.

Picture this: you’re sitting in your living room having a cup of coffee when the phone-booth-size box in the corner dings, alerting you that you have an incoming call. You accept it, and within seconds your best friend (or your partner, your grandmother, your boss) appears in the box—in the form of millions of points of light engineered to look and sound exactly like the real person. And the real person is on the other end of the line, talking to you in real time as their holographic likeness moves around the box—you can see their gestures, body language, and facial expressions just as if they were really there with you.

The closest approximation to this that you may have heard about was when a holographic version of the late Tupac Shakur performed at Coachella in 2012. The hologram was simultaneously highly detailed—the lines of Tupac’s washboard abs were clearly defined and visible—and somewhat blurry; after the opening “scene,” in which the hologram stood still, it was hard to see any of Tupac’s facial features. The Tupac hologram was created by events tech company AV Concepts and Hollywood special effects studio Digital Domain, and reportedly cost at least $100,000. It seems holograms don’t come cheap; the aforementioned hologram box is currently going for $60,000.
The box is called an Epic HoloPortl, and it’s made by PORTL, a company whose founder was inspired by Tupac’s hologram; after seeing the 2012 performance, David Nussbaum quickly bought the patents for the technology that made it possible, and has been working on turning the tech into something useful, fun, and scalable ever since. The Epic has high-resolution transparent LCD screens embedded into its interior walls. The person on the other end—the one appearing as a hologram, that is—just needs to have a camera and be standing against a white background. A camera on the Epic shows the sender the room and people he or she is being beamed to, essentially just like a Zoom call. Last month PORTL raised $3 million in funding, led by Silicon Valley venture capitalist Tim Draper. Nussbaum says he’s sold a hundred Epics, has pre-orders “in excess of a thousand,” and dozens of the devices have already been delivered, with clients including malls, airports, and movie theaters (all places that aren’t very frequented today—but here’s hoping they’ll make a comeback when the pandemic subsides). In fact, PORTL may not have gotten this funding if it weren’t for the pandemic; Nussbaum told TechCrunch that Draper pushed him to expand his vision for the company and its technology when the virus hit, likely anticipating that people will want new ways to communicate from a distance. Few can afford to shell out $60k for a hologram booth, though (not to mention having space for a 7-foot-tall by 5-foot-wide by 2-foot-deep box), and Nussbaum knows it; his next project is to build a smaller, cheaper version of the Epic. Even at a tenth of the current cost, the tech likely wouldn’t see widespread adoption by people wanting their own personal hologram portal at home. But there are many possible use cases beyond person-to-person communication. 
Any venue or event that would typically hire famous people to appear in person—be they celebrities, academics, religious figures, or business leaders—could beam a hologram of those people in instead. Th...
9 minutes | Nov 24, 2020
Another Win for Senolytics: Fighting Aging at the Cellular Level Just Got Easier
Longevity research always reminds me of the parable of the blind men and the elephant. A group of blind men, who’ve never seen an elephant before, each touches a different part of the elephant’s body to conceptualize what the animal is like. Because of their limited experience, each person has widely different ideas—and they all believe they’re right. Aging, thanks to its complexity, is the biomedical equivalent of the elephant. For decades, researchers have focused on one or another “hallmark” of aging, with admirable success. For example, we now know that energy production in aging cells goes haywire. Immune responses ramp up, stewing aging tissue in a soup of inflammatory molecules. Dying cells turn into zombie-like “senescent cells” that abdicate their normal functions and instead pump out chemicals that further contribute to inflammation and damage. Yet how these hallmarks fit together into a whole picture remained a mystery. Now, thanks to a new study published in Nature Metabolism, we’re finally starting to connect the dots. In mice, the study linked up three promising anti-aging pathways—battling senescent cells, inflammation, and wonky energy production in cells—into a cohesive detective story that points to a master culprit that drives aging. Spoiler: senolytics, drugs that wipe out senescent cells and darling candidates for prolonging healthspan, may also have the power to rescue energy production in cells. Let’s meet the players. From Metabolism to Zombie Cells Individual cells are like tiny cities with their own power plants to keep them running. One “celebrity” molecular worker in the process of generating energy is nicotinamide adenine dinucleotide (NAD). It’s got a long name, but an even longer history and massive fame. Discovered in 1906, NAD is a molecule that’s critical for helping the cell’s energy factories, the mitochondria, churn out energy. 
NAD is a finicky worker that appears on demand—the cell will make more if it needs more; otherwise, extra molecules are destroyed (harsh, I know). As we age, our cells start losing NAD. Without this critical worker, the mitochondrial factory goes out of whack, which in turn knocks the cell’s normal metabolism into dysfunction. At least, that’s the story in mice. Though not yet proven to slow aging or age-related disorders in humans, NAD boosters are already making a splash in the supplement world, which makes it even more pressing to understand how and why NAD levels drop as we age. Giving NAD a run for its anti-aging fame are senolytics, a group of chemicals that destroy senescent “zombie” cells. These frail, beat-up cells are oddities: rather than dying from DNA damage, they turn to the dark side, staying alive but leaking an inflammatory cesspool of molecules called SASP (senescence-associated secretory phenotype) that “spread” harm to their neighbors. A previous study in ancient mice, the equivalent of 90-year-old humans, found that wiping out these zombie cells with two simple drugs increased their lifespan by nearly 40 percent. Others, using a genetic “kill switch” in mice, found that destroying just half of the zombie cells helped the mice live 20 percent longer, while having healthier kidneys, stronger hearts, luscious fur, and perkier energy levels. As with NAD supplements, pharmaceutical companies are investigating over a dozen potential senolytics in a race to bring one to market. But what if we could combine the two? A Hub for Aging The new study, led by aging detectives Drs. Judith Campisi and Eric Verdin at the Buck Institute for Research on Aging in Novato, California, asked whether we can draw a line between NAD and zombie cells, like suspects on an evidence board. Their “lightbulb” clue was a third molecule of interest, highlighted in a 2016 study. Meet CD38, a molecule that plays a double role as an aging culprit. 
It wreaks havoc as an immune molecule to boost inflammation, while chewing up and destroying NAD. If CD38 is a new dru...
5 minutes | Nov 23, 2020
MIT Report: Robots Aren’t the Biggest Threat to the Future of Work—Policy Is
Fears of a robot-driven jobs apocalypse are a recurring theme in the media. But a new report from MIT has found that technology is creating as many jobs as it destroys, and bad policy is a bigger threat to workers than automation. Ever since a landmark 2013 paper from the University of Oxford estimated that 47 percent of US jobs were at risk of automation, there’s been growing concern about how technology will shape the future of work. Last year another influential study reported that robots could replace up to 20 million jobs by 2030. But after three years of research, the final report from MIT’s Task Force on the Work of the Future says we’re actually facing a gradual technological evolution, not a robot revolution. Nonetheless, without major overhauls to the political and economic systems we’ve built around that technology, outcomes for workers don’t look promising. “The 21st century will see a rising tide of new technologies, some of which are now emerging and some of which will surprise us,” the report’s authors write. “If those technologies deploy into the labor institutions of today that were designed for the last century, we will see familiar results: stagnating opportunity for the majority of workers accompanied by vast rewards for a fortunate minority.” The headline figure from the report is that about 63 percent of jobs performed in 2018 did not exist in 1940, suggesting that even as technology makes some jobs obsolete, many new ones are being created. At the same time, the overall percentage of adults in paid employment has risen for over a century. This mirrors the findings of another prominent report released by the World Economic Forum last year, which found that while automation will disrupt 85 million jobs globally by 2025, it will also create 97 million new ones. 
But the MIT report also acknowledges that while fears of an imminent jobs apocalypse have been over-hyped, the way technology has been deployed over recent decades has polarized the economy, with growth in both white-collar work and low-paid service work at the expense of middle-tier occupations like receptionists, clerks, and assembly-line workers. This is not an inevitable consequence of technological change, though, say the authors. The problem is that the spoils from technology-driven productivity gains have not been shared equally. The report notes that while US productivity has risen 66 percent since 1978, compensation for production workers and those in non-supervisory roles has risen only 10 percent. “People understand that automation can make the country richer and make them poorer, and that they’re not sharing in those gains,” economist David Autor, a co-chair of the task force, said in a press release. “We need to restore the synergy between rising productivity and improvements in labor market opportunity.” At the heart of the problem, say the authors, is a lack of protection for workers in the form of things like minimum wages, sick leave, notice periods, and collective bargaining rights. Atrophy in training programs in both the public and private sectors is also making it harder for workers disrupted by technology to adapt. The report lays out three main planks for rectifying the situation, starting with serious investment and innovation in training, particularly within companies and at community colleges. The second priority should be to improve the position of workers by strengthening labor laws, revamping unemployment insurance, and setting the federal minimum wage to at least 40 percent of the national median wage and indexing it against inflation. Finally, the authors say, we need to redirect innovation towards socially beneficial outcomes and augmenting rather than replacing workers. 
To do that, federal research spending should be boosted and directed toward areas that tend to be neglected by the private sector as well as shared out more equitably around the country. The current tax code, which unduly fav...
8 minutes | Nov 22, 2020
The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust
The history of computer chips is a thrilling tale of extreme miniaturization. The smaller, the better is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world. At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping. The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade. So can Goliath flip the script on David? Cerebras is on a mission to find out. Big Chips Beyond AI When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models. Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI. So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design. 
The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than the Joule 2.0 supercomputer managed a similar task. The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.” The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” Past a certain point, scaling a supercomputer like Joule no longer produces better results for this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores. A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars. While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design. Cut the Commute Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores. 
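Why does a supercomputer's simulation speed peak at a fraction of its cores? A toy strong-scaling model illustrates the idea: per-core work shrinks as cores are added, but communication overhead grows with core count, so speedup rises, peaks, and then falls. The coefficients below are invented for illustration and are not measurements from the NETL/Cerebras paper or from Joule itself.

```python
# Toy strong-scaling model: why adding cores eventually stops helping.
# Time per unit of work = serial fraction (Amdahl's law)
#   + parallel work divided across cores
#   + a communication cost that grows with core count.
# All coefficients are illustrative assumptions.

def speedup(n_cores, serial_frac=1e-5, comm_cost=3.7e-9):
    parallel_frac = 1.0 - serial_frac
    time = serial_frac + parallel_frac / n_cores + comm_cost * n_cores
    return 1.0 / time

core_counts = [1024, 4096, 16384, 65536]
for n in core_counts:
    print(f"{n:>6} cores -> speedup {speedup(n):,.0f}x")

# With these made-up coefficients, speedup peaks at 16,384 cores and then
# declines, mirroring the plateau described for Joule.
best = max(core_counts, key=speedup)
```

The wafer-scale design attacks the communication term directly: with cores wired to their neighbors on one piece of silicon, data doesn't have to cross racks of separate chips.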
Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better. Most large-scale computing tasks depend on massively parallel proces...
22 minutes | Nov 20, 2020
Who Should Get a Covid-19 Vaccine First?
If the book of nature is written in the language of mathematics, as Galileo once declared, the Covid-19 pandemic has brought that truth home for the world’s mathematicians, who have been galvanized by the rapid spread of the coronavirus. So far this year, they have been involved in revealing how contagious the novel coronavirus is, how far we should stand from each other, how long an infected person might shed the virus, how a single strain spread from Europe to New York and then burst across America, and how to “flatten the curve” to save hundreds of thousands of lives. Modeling also helped persuade the Centers for Disease Control and Prevention that the virus can be airborne and transmitted by aerosols that stay aloft for hours. And at the moment many are grappling with a particularly urgent—and thorny—area of research: modeling the optimal rollout of a vaccine. Because vaccine supply will be limited at first, the decisions about who gets those first doses could save tens of thousands of lives. This is critical now that promising early results are coming in about two vaccine candidates—one from Pfizer and BioNTech and one from Moderna—that may be highly effective and for which the companies may apply for emergency authorization from the Food and Drug Administration. But figuring out how to allocate vaccines—there are close to 50 in clinical trials on humans—to the right groups at the right time is “a very complex problem,” says Eva Lee, director of the Center for Operations Research in Medicine and Health Care at the Georgia Institute of Technology. Lee has modeled dispensing strategies for vaccines and medical supplies for Zika, Ebola, and influenza, and is now working on Covid-19. The coronavirus is “so infectious and so much more deadly than influenza,” she says. 
“We have never been challenged like that by a virus.” Howard Forman, a public health professor at Yale University, says “the last time we did mass vaccination with completely new vaccines” was with smallpox and polio. “We are treading into an area we are not used to.” All the other vaccines of the last decades have either been tested for years or were introduced very slowly, he says. Because Covid-19 is especially lethal for those over 65 and those with other health problems such as obesity, diabetes, or asthma, and yet is spread rapidly and widely by healthy young adults who are more likely to recover, mathematicians are faced with two conflicting priorities when modeling for vaccines: Should they prevent deaths or slow transmission? The consensus among most modelers is that if the main goal is to slash mortality rates, officials must prioritize vaccinating those who are older, and if they want to slow transmission, they must target younger adults. “Almost no matter what, you get the same answer,” says Harvard epidemiologist Marc Lipsitch. Vaccinate the elderly first to prevent deaths, he says, and then move on to other, healthier groups or the general population. One recent study modeled how Covid-19 is likely to spread in six countries—the US, India, Spain, Zimbabwe, Brazil, and Belgium—and concluded that if the primary goal is to reduce mortality rates, adults over 60 should be prioritized for direct vaccination. The study, by Daniel Larremore and Kate Bubar of the University of Colorado Boulder, Lipsitch, and their colleagues, has been published as a preprint, meaning it has not yet been peer reviewed. Of course, when considering Covid-19’s outsized impact on minorities—especially Black and Latino communities—additional considerations for prioritization come into play. 
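The deaths-versus-transmission trade-off the modelers describe can be sketched with a deliberately crude two-group SIR model. Everything here is a toy: the populations, contact rates, and fatality ratios are invented for illustration and are not fitted to real Covid-19 data or taken from the studies mentioned above.

```python
# Crude two-group SIR model comparing vaccine allocation strategies.
# All parameters (populations, contact rates, fatality ratios) are
# invented for illustration, not fitted to real Covid-19 data.

def simulate(coverage):
    """coverage: fraction of each group immunized before the epidemic."""
    N = {"young": 8000.0, "old": 2000.0}
    ifr = {"young": 0.0005, "old": 0.05}   # infection fatality ratio
    # contact[i][j]: transmission rate onto group i from group j;
    # the young mix far more than the old in this toy population.
    contact = {"young": {"young": 0.30, "old": 0.05},
               "old":   {"young": 0.10, "old": 0.05}}
    gamma = 0.1                             # recovery rate per day
    S = {g: N[g] * (1.0 - coverage[g]) for g in N}
    I = {g: 10.0 for g in N}                # initial seed infections
    infections = {g: 0.0 for g in N}
    for _ in range(300):                    # daily time steps
        new = {g: min(S[g], S[g] * sum(contact[g][h] * I[h] / N[h] for h in N))
               for g in N}
        for g in N:
            S[g] -= new[g]
            I[g] += new[g] - gamma * I[g]
            infections[g] += new[g]
    deaths = sum(infections[g] * ifr[g] for g in N)
    return deaths, sum(infections.values())

doses = 2000.0  # enough to fully cover the old group in this toy setup
deaths_old_first, _ = simulate({"young": 0.0, "old": 1.0})
deaths_young_first, _ = simulate({"young": doses / 8000.0, "old": 0.0})
```

In this toy setup, spending all doses on the high-fatality older group produces far fewer deaths than spreading the same doses across the young, echoing the consensus that vaccinating the elderly first minimizes mortality; a real model would add many more age bands, time-varying supply, and uncertainty in the parameters.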
Most modelers agree that “everything is changing with coronavirus at the speed of light,” as applied mathematician Laura Matrajt, a research associate at the Fred Hutchinson Cancer Research Center in Seattle, put it in an email. That includes our understanding of how the virus spreads, how it attacks the body, how having another disease at the same time mig...
4 minutes | Nov 19, 2020
A 3D Printed Apartment Building Is Going Up in Germany
3D printing is making strides in the construction industry. In just a couple years we’ve seen the tech produce single-family homes in a day, entire communities of homes for people in need, large municipal buildings, and even an egg-shaped concept home for Mars (oh, and let’s not forget the 3D printed boats and cars of the world!). Another impressive feat will soon be added to this list, as a German construction company has begun work on a 3D printed three-story apartment building. Located north-west of Munich in a town called Wallenhausen, when completed the building will have five separate apartments and a total square footage of 4,090. The project is a collaboration between PERI Group, a German supplier of formwork and scaffolding systems, and COBOD, a Danish company that makes modular 3D printers for construction (and recently partnered with GE to 3D print the bases of 650-foot-tall wind turbines!). The printer being used for the apartment building is called the BOD2. It’s the second iteration of the company’s original printer, called the BOD, which was used to print a small, free-standing office building in Copenhagen near one of the city’s harbors. BOD2 is a gantry-style printer that moves between three axes on a metal frame. The material being used for construction has been dubbed “i.tech 3D.” It’s a cement mixture developed specifically for use in 3D printing by German multinational building materials company HeidelbergCement. BOD2 doesn’t do its thing at the push of a button; it needs two human operators to run it, and also has cameras that constantly monitor its activity. COBOD claims the BOD2 is the fastest 3D construction printer on the market, laying down one meter’s worth of material per second—and leaving space for plumbing and electrical pipes in the process. There is at least one 3D printed apartment building already in existence: a six-story building in Shanghai was unveiled in 2015 by Winsun Decoration Design Engineering Co. 
An important difference between that building and the German one is that the walls of the Shanghai building were printed offsite then assembled at the building location, whereas for the Wallenhausen building, the printer will lay down its cement mix on site, no subsequent moving or assembly required. Unlike many other recent 3D printing construction projects, the apartment building isn’t a proof of concept or a demo—it’s a for-profit endeavor, and the apartments will be rented at market rates as soon as they’re completed. “We are very confident that 3D construction printing will become increasingly important in certain market segments over the coming years and has considerable potential,” said Thomas Imbacher, PERI’s director of innovation and marketing. “By printing the first apartment building on-site, we are demonstrating that this new technology can also be used to print large-scale dwelling units.” Bringing 3D printing into the mainstream of the construction industry is a worthwhile goal, because the technology saves on all kinds of costs while still yielding safe, durable structures. It’s also much faster than traditional construction methods, making it a prime tool for situations like disaster relief where dwellings need to go up quickly and cheaply. The printing process for the apartment building is expected to take six weeks. If its creators’ vision plays out, it will be just the first of many such buildings in Germany and elsewhere; in just a few short years, living in a 3D printed apartment may be so common that it won’t even earn you any bragging rights. Image Credit: COBOD
4 minutes | Nov 18, 2020
McDonald's Is Making a Plant-Based Burger; You Can Try It in 2021
Fast-food chains have been doing what they can in recent years to health-ify their menus. For better or worse, burgers, fries, fried chicken, roast beef sandwiches, and the like will never go out of style—this is America, after all—but consumers are increasingly gravitating towards healthier options. One of those options is plant-based foods, and not just salads and veggie burgers, but “meat” made from plants. Burger King was one of the first big fast-food chains to jump on the plant-based meat bandwagon, introducing its Impossible Whopper in restaurants across the country last year after a successful pilot program. Dunkin’ (formerly Dunkin’ Donuts) uses plant-based patties in its Beyond Sausage breakfast sandwiches. But there’s one big player in the fast food market that’s been oddly missing from the plant-based trend—until now. McDonald’s announced last week that it will debut a sandwich called the McPlant in key US markets next year. Unlike Dunkin’ and Burger King, who both worked with Impossible Foods to make their plant-based products, McDonald’s worked with Los Angeles-based Beyond Meat, which makes chicken, beef, and pork-like products from plants. According to Bloomberg, though, McDonald’s decided to forego a partnership with Beyond Meat in favor of creating its own plant-based products. Imitation chicken nuggets and plant-based breakfast sandwiches are in its plans as well. McDonald’s has bounced back impressively from its March low (when the coronavirus lockdowns first happened in the US). Last month the company’s stock reached a 52-week high of $231 per share (as compared to its low in March of $124 per share). To keep those numbers high and make it as easy as possible for customers to get their hands on plant-based burgers and all the traditional menu items too, the fast food chain is investing in tech and integrating more digital offerings into its restaurants. 
McDonald’s has acquired a couple artificial intelligence companies in the last year and a half; Dynamic Yield is an Israeli company that uses AI to personalize customers’ experiences, and McDonald’s is using Dynamic Yield’s tech on its smart menu boards, for example by customizing the items displayed on the drive-thru menu based on the weather and the time of day, and recommending additional items based on what a customer asks for first (i.e. “You know what would go great with that coffee? Some pancakes!”). The fast food giant also bought Apprente, a startup that uses AI in voice-based ordering platforms. McDonald’s is using the tech to help automate its drive-throughs. In addition to these investments, the company plans to launch a digital hub called MyMcDonald’s that will include a loyalty program, start doing deliveries of its food through its mobile app, and test different ways of streamlining the food order and pickup process—with many of the new ideas geared towards pandemic times, like express pickup lanes for people who placed digital orders and restaurants with drive-throughs for delivery and pickup orders only. Plant-based meat patties appear to be just one small piece of McDonald’s modernization plans. Those of us who were wondering what they were waiting for should have known—one of the most-recognized fast food chains in the world wasn’t about to let itself get phased out. It seems it will only be a matter of time until you can pull out your phone, make a few selections, and have a burger made from plants—with a side of fries made from more plants—show up at your door a little while later. Drive-throughs, shouting your order into a fuzzy speaker with a confused teen on the other end, and burgers made from beef? So 2019. Image Credit: McDonald’s
9 minutes | Nov 17, 2020
This Is How We’ll Engineer Artificial Touch
Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.” Answer: “What is the human hand?” Our hands are insanely complex feats of evolutionary engineering. Densely packed sensors provide intricate and ultra-sensitive feelings of touch. Dozens of joints synergize to give us remarkable dexterity. A “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel. So why can’t robots do the same? In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard University argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re at the precipice of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon. Robotic hands with a sense of touch—and the AI brain to match it—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands are far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, and high-resolution artificial touch are the next giant breakthrough in seamlessly linking a biological brain to a mechanical hand. Here’s what Sundaram laid out to get us to that future. How Does Touch Work, Anyway? Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. 
These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain. The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure and “translate” it into electrical signals. These signals pulse through the nerves on your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.” At least, that’s the simple version, but one too vague and not particularly useful for recapitulating touch. To get there, we need to zoom in. The cells on your hand that collect touch signals, called tactile “first order” neurons (enter Star Wars joke), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, to a vast area of the hand. Each neuron has its own little domain, called a “receptive field,” although some overlap. Like governors, these neurons manage a semi-dedicated region, so that any signal they transfer to the higher-ups—the spinal cord and brain—is actually integrated from multiple sensors across a large distance. It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than with a dry one, and it feels different. In a way, the hand’s tactile neurons play a game of Morse code. Through different frequencies of electrical beeps, they’re able to transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object. Biology to Machine Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. 
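The "Morse code" idea, that firing frequency carries stimulus intensity, can be sketched as a toy rate code: an encoder maps pressure to a spike rate, and a downstream decoder recovers the intensity from the rate it observes. The linear mapping and the numbers here (a 200 Hz ceiling, normalized pressure) are illustrative assumptions; real mechanoreceptors respond nonlinearly and adapt over time.

```python
# Toy rate code: a mechanoreceptor-like encoder maps pressure to a
# firing rate, and a downstream "brain" decodes the rate back into an
# intensity estimate. The linear mapping is a simplification.

MAX_RATE_HZ = 200.0  # illustrative ceiling on firing frequency

def encode(pressure):
    """Map normalized pressure (0..1) to a firing rate in Hz."""
    clamped = max(0.0, min(1.0, pressure))  # saturate out-of-range input
    return MAX_RATE_HZ * clamped

def spike_count(pressure, window_s=0.5):
    """Spikes observed in a fixed time window at that firing rate."""
    return int(encode(pressure) * window_s)

def decode(rate_hz):
    """Recover the pressure estimate from an observed firing rate."""
    return rate_hz / MAX_RATE_HZ

# Round trip: light, medium, and firm presses survive encode/decode.
for p in (0.1, 0.5, 0.9):
    assert abs(decode(encode(p)) - p) < 1e-9
```

Even this caricature shows why touch is hard to fake: a single scalar rate per neuron must be integrated across thousands of overlapping receptive fields, with feedback, before anything like "texture" or "weight" emerges.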
Earlier this year, for example, a team from Columbia en...
5 minutes | Nov 16, 2020
90% of the Global Power Capacity Added in 2020 Was Renewable
There’s been plenty of hand-wringing about the potential for the Covid-19 pandemic to distract from the ongoing fight against climate change. But the latest data from the International Energy Agency (IEA) shows promising signs that a “green recovery” may be materializing. Right from the start of the crisis there were fears that the coronavirus would derail fragile efforts to reduce global carbon emissions by pulling politicians’ attention away from the cause or tempting governments to throw out environmental standards to give a shot in the arm to their pandemic-ravaged economies. At the same time, more optimistic voices said this was the perfect time to double down on our efforts to decarbonize. With many governments eager to spend money to pull their countries out of recession, research from leading economists showed that focusing on green investments could provide a double win for both economies and the environment. Since then various governments have pledged to focus on a “green recovery,” but recent research from The Guardian suggests that many are not living up to their promises, with recovery packages pumping billions of dollars into environmentally-damaging projects. But despite this, a study from the IEA shows promising signs that renewables have fared far better than fossil fuels during the pandemic. While the crisis sparked sharp declines in oil, gas, and coal, the Renewables 2020 report found that carbon-free electricity will account for almost 90 percent of the total power capacity added this year, and the pace is set to accelerate in 2021. “Renewable power is defying the difficulties caused by the pandemic, showing robust growth while other fuels struggle,” IEA Executive Director Dr. Fatih Birol said in a press release. 
“The future looks even brighter with new capacity additions on course to set fresh records this year and next.” This year’s record growth has been driven by the US and China, with wind and solar set to expand by 30 percent in both countries. Next year, global renewable capacity is set to expand by 10 percent thanks to the European Union and India, with the latter set to double its 2020 expansion. Overall, this year’s additions will see renewable generation increase by seven percent, despite a five percent drop in energy demand. One of the most promising signs that this isn’t just a flash in the pan is the fact that stocks in renewable power equipment manufacturers and project developers have been outperforming the overall energy sector as well as major stock market indices. Solar companies in particular are doing well, with their total share price more than doubling since December 2019. That probably shouldn’t be much of a surprise; another IEA report released last month found that solar is now the cheapest form of electricity to build in history. The report did highlight a couple of weak spots for the industry, though. Heating, for both industrial and domestic purposes, remains the single greatest use of energy worldwide, and modern renewables account for only 11 percent, with the rest dominated by fossil fuels. Renewable biofuels used in transport, which accounts for 30 percent of total energy use, have also suffered due to reduced demand as economies shrink and fossil fuel prices fall. Despite the momentum behind renewables, the authors say that continued support from policymakers will be crucial to maintaining it. A number of key incentives for renewables developers in major markets are set to expire soon, and without clarity from governments over whether or not they will be renewed 2022 could see a small dip in expansion. 
But overall, the authors expect sustained policy support for renewables over the next five years, which combined with continued cost reductions should result in strong growth. They expect total wind and solar capacity to exceed that of natural gas in 2023 and coal in 2024. “In 2025, renewables are set to become the largest source of electricity gene...
6 minutes | Nov 13, 2020
Smart Concrete Could Pave the Way for High-Tech, Cost-Effective Roads
Every day, Americans travel on roads, bridges, and highways without considering the safety or reliability of these structures. Yet much of the transportation infrastructure in the US is outdated, deteriorating, and badly in need of repair. Of the 614,387 bridges in the US, for example, 39 percent are older than their designed lifetimes, while nearly 10 percent are structurally deficient, meaning they could begin to break down faster or, worse, be vulnerable to catastrophic failure. The cost to repair and improve nationwide transportation infrastructure ranges from nearly US$190 billion to almost $1 trillion. Repairing US infrastructure costs individual households, on average, about $3,400 every year. Traffic congestion alone is estimated to cost the average driver $1,400 in fuel and time spent commuting, a nationwide tally of more than $160 billion per year. I am a professor in the Lyles School of Civil Engineering and the director of the Center for Intelligent Infrastructures at Purdue University. My co-author, Vishal Saravade, is part of my team at the Sustainable Materials and Renewable Technology (SMART) Lab. The SMART Lab researches and develops new technologies to make American infrastructure “intelligent,” safer, and more cost-effective. These new systems self-monitor the condition of roads and bridges quickly and accurately and can, sometimes, even repair themselves. Smart, Self-Healing Concrete. Infrastructure—bridges, highways, pavement—deteriorates over time with continuous use. The life of structures could be extended, however, if damage were monitored in real time and fixed early on. In the northern US, for example, freeze-thaw cycles in winter cause water to seep into the pavement, where it freezes, expands, and enlarges cracks, which can cause significant damage. If left unrepaired, this damage may propagate and break down pavements and bridges. Such damage can be identified and repaired autonomously. 
At an early stage of a crack, for example, self-healing pavement would activate super absorbent polymers to absorb water and produce concrete-like material that fills in the crack. Cracks as small as a few microns could be healed early, delaying or preventing the later stages of freeze-thaw damage. Roadway Technology. Many researchers around the world are working on improving construction infrastructure. Technologies being explored include solar and energy-harvesting roads, charging lanes for electric vehicles, smart streetlights, and construction materials with reduced carbon emissions. At the Purdue SMART Lab, our team is also testing novel sensors that monitor transportation infrastructure by embedding them in several Indiana interstate highways. We plan to expand to other state highway systems in the next few years, with the goal of better accommodating increased traffic and providing accurate estimates of road conditions both during construction and over a road’s life. Sensors embedded in concrete pavement acquire information about the infrastructure’s health in real time and communicate the data to computers. Electrical signals are applied through the sensors, and the concrete’s vibrations are converted into electrical signals that are read and analyzed by lab-built, customized software. This enables transportation engineers to make effective, data-driven decisions, from when to open roads to traffic to proactively identifying issues that cause damage or deterioration. After concrete is poured for highway pavement, for example, it takes hours to cure and become strong enough to open to traffic. The timing of when to open a highway depends on when the concrete mix is cured. If a roadway opens too early and the concrete is undercured, it can reduce the life expectancy of the pavement and increase maintenance costs. 
Waiting too long to open a road can result in traffic delays, congestion, and increased safety risks for construction workers and commuters. Curing conc...
2 minutes | Nov 12, 2020
How an Ownership Economy Could Make Internet Platforms Work for Everyone
The way we transact with each other and conduct business has changed a lot in the last 10 years. For some of us it’s hard to remember how we used to find rides or places to stay before Uber, Lyft, and Airbnb existed. Industries like hospitality, retail, transportation, and even human resources have been digitized and platformized. Though the average consumer has benefited, it’s not been all roses. Increasingly, critics have noted an imbalance in the benefits going to the creators of these platforms versus those who use their products or services. Part of the problem is the way “internet middlemen”—platforms like Airbnb or Upwork that connect buyers and sellers directly, for a fee—are structured and funded. Venture capitalists throw money at promising startups like there’s no tomorrow, and at first, all is well: investment dollars help companies keep costs low to attract users. More and more people on both sides of the transaction adopt the service because the costs are low and the benefits high. But that upward trajectory can’t go on forever, and it’s only a matter of time before investors come knocking. When they do, companies tend to shift their focus from users back to the people who gave them the money to make it all happen in the first place. The pressures of profitability bring data harvesting, targeted ads, higher fees, lower compensation. Whatever it takes. There’s clearly value in platforms—but is there a better business model? In a new video from Futurism Media and Singularity University’s IdeaFront, we glimpse a very different future. One in which blockchain-based cooperatives replace today’s mega-corporate platform model. The middlemen are out. Users decide the values of their platforms and directly benefit from the gains of growth. In the shadow of today’s dominant platforms, it may seem like a pipe dream, but there are already examples of successful businesses fighting the current. Will a new model catch on? Image Credit: IdeaFront
4 minutes | Nov 12, 2020
You Can Buy This Electric Car for $7,999 in California
A tiny electric car that costs just $4,200 has been all the rage in China this year. The Wuling Hong Guang Mini EV generated over 15,000 orders within 20 days of its release in July, and added another 35,000 to that in August, beating out Chinese orders for Tesla Model 3s in the same period. Now another small, affordable Chinese electric car is set to make its debut on American roads—Californian roads, to be exact. Last week the California Air Resources Board certified that Kandi America, the US subsidiary of Chinese battery and electric car maker Kandi Technologies Group Inc., met the state’s emissions standards. How affordable are we talking, you ask? The listed price of the model K27 is $17,499. But once you take into account California’s $2,000 electric vehicle incentive credit and a federal tax credit of $7,500, that leaves you at a total cost of $7,999 (if you live in California). The sleeker-looking model K23 has a sticker price of $27,499. The K23 has a bigger battery—41 kilowatt-hours versus the K27’s 17 kilowatt-hours—but it takes a little longer to charge and gets fewer miles per gallon equivalent (MPGe) in both city and highway driving. The more expensive model is also a tad larger, though both cars are listed as being able to seat four adults. “This certification comes at a great time for Kandi America as the infrastructure put forth by state elected officials, including the Governor’s recent executive order banning sales of new gasoline- and diesel-powered cars and trucks by 2035, requires new quality, affordable products to enter the market quickly,” said CEO of Kandi America Johnny Tai. The order he’s referring to was signed by California Governor Gavin Newsom in late September; it bans the sale of new gas cars and trucks by 2035. Used gas cars will still be able to be bought and sold, though. 
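The quoted bottom line is simple arithmetic; here's a minimal sketch of the math, assuming a buyer who qualifies in full for both the state incentive and the federal tax credit (eligibility varies by buyer and income):

```python
# Effective price of the Kandi K27 for a qualifying California buyer.
sticker_price = 17_499      # listed price of the K27, in USD
ca_ev_credit = 2_000        # California electric vehicle incentive
federal_tax_credit = 7_500  # federal EV tax credit

effective_price = sticker_price - ca_ev_credit - federal_tax_credit
print(effective_price)  # 7999
```

Note that the federal amount is a tax credit, not a point-of-sale discount, so the buyer still pays closer to sticker up front and recoups the difference at tax time.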
Though transportation is California’s biggest source of carbon emissions, making up 40 percent of the total (though to be fair, that percentage includes emissions from aircraft and boats too), the order was met with mixed reactions; a senior economist at the Institute for Energy Research pointed out that “Electric cars might not have emissions at a tailpipe, but they do have emissions at the power plant.” Electric cars accounted for less than eight percent of all new cars sold in California last year. Eight percent sounds low, but Californians are actually ahead of the curve when you compare this figure to global EV sales, which stood at just three percent of the total auto market through 2019. Despite the extensive economic losses many people have suffered as a result of the coronavirus pandemic, car sales are actually doing all right; people have been saving more of their disposable income than they normally do (because let’s face it, there aren’t many places or reasons to drop a lot of cash these days), interest rates are low, and people don’t want to fly or take public transportation. This all adds up to money in the bank for carmakers (and people looking to sell their used cars). Kandi cars could fare well in California’s eco-conscious market; they provide a budget-friendly alternative to the Teslas and Priuses that seem to be flooding the state’s roads. Kandi cars may not be as sexy as Teslas, but they do have some modern, tech-y features like a touchscreen, backup camera, and Bluetooth. The company is currently taking pre-orders in exchange for a refundable $100 deposit. Image Credit: Kandi America
7 minutes | Nov 11, 2020
Why We Need a Collective Vision to Design the Future of Health
My mother died of Covid-19 at the age of 91. She was recovering from surgery in an assisted-living facility in Durham, North Carolina. While she had previously been healthy, the virus aggressively invaded her body, and her doctors soon told us there was nothing they could do. We were allowed one brief compassion visit. My sisters and I suited up in head-to-toe PPE to be with my mother for 20 minutes. She was unresponsive, but I hope she knew we were there. She passed away a few days later. My family’s heartbreak is not unique. We share it with more than 200,000 American families. In communities of color, this heartbreak is not even uncommon. Discrimination, lack of access to health care, and poverty all increase the risks for Black and brown people. In my own Black, mostly working-class family, almost everyone has a friend or in-law who has died from Covid. But this heartbreak is avoidable. While coronavirus has proven devastating around the world, it has taken a particularly deadly toll on the United States. Our elected leaders bear much responsibility for this, but our deeply flawed health system is also to blame. A system that prioritizes top-notch health care for a small number of people rather than prevention and quality care for everyone will never perform well in a public health crisis. The US spent $3.6 trillion on health care in 2018, but Americans live shorter, sicker lives than people in many other wealthy countries that spend a fraction of what we do. We also have glaring health inequities that leave Black, brown, and Indigenous people, and folks who live in under-resourced rural areas, with greater health threats and lower life expectancies. Covid has alerted us to a pressing need: It’s time to design a future for US health that’s centered on people. A Collective Vision. Systems change is a big but worthy endeavor. Changing a system means anticipating what it might have to deal with in the next 20, 50, or 100 years. 
FORESIGHT, a nationwide health initiative I help lead, recently released a scan of many health trends and emergent issues. They include the increased prevalence of technologies like AI, gene editing, and digital fabrication. The future will also hold recurring pandemics, climate crises, and shifting racial demographics. We brought a diverse group of people together to review these trends and emergent issues and develop scenarios of what the future could look like. These scenarios aren’t intended to be predictions; rather, they’re meant to kickstart a collective imagining of how today’s trends and technologies could influence the future. One future finds the “pandemics of the 2020s” have driven many city dwellers to leave their hectic lives and settle in self-selected kinship groups called Eco-Hubs. These Hubs offer access to a better quality of life, shared and local ownership, slow growth, and environmental sustainability. Collaboration with government and corporations gives Hub residents access to advanced technologies, including education robots that serve youth and well-being robots that fill health needs. People living in Hubs both fear and rely on city cultures and big manufacturing and agriculture systems; their relationship with the outside world is always delicate. Another scenario explores swapping out economic benchmarks like GDP for people-centric benchmarks measuring human well-being. Here, viewing the costs and benefits of our system through a different lens incentivizes stronger public health, housing, and education systems. In a particularly automated future, in which corporations have become collectives, health care includes on-demand services linked to health data from sensors in watches; drones deliver medicine; and health is more strongly connected to food and land with a Food-Land-Health partnership between large food companies and family farmers. 
In a darker scenario, environmental breakdown forces people to leave southern and coastal regions for colder, mid...
© Stitcher 2021