MIT News – Artificial intelligence


Real-time data for a better response to disease outbreaks The startup Kinsa uses its smart thermometers to detect and track the spread of contagious illness before patients go to the hospital. Fri, 21 Aug 2020 00:00:00 -0400 Zach Winn | MIT News Office <p>Kinsa was founded by MIT alumnus Inder Singh MBA ’06, SM ’07 in 2012, with the mission of collecting information about when and where infectious diseases are spreading in real-time. Today the company is fulfilling that mission along several fronts.</p> <p>It starts with families. More than 1.5 million of Kinsa’s&nbsp;“smart” thermometers have been sold or given away across the country, including hundreds of thousands to families from low-income school districts. The thermometers link to an app that helps users decide if they should seek medical attention based on age, fever, and symptoms.</p> <p>At the community level, the data generated by the thermometers are anonymized and aggregated, and can be shared with parents and school officials, helping them understand what illnesses are going around and prevent the spread of disease in classrooms.</p> <p>By working with over 2,000 schools to date in addition to many businesses, Kinsa has also developed predictive models that can forecast flu seasons each year. In the spring of this year, <a href=”″ target=”_blank”>the company showed</a> it could predict flu spread 12-20 weeks in advance at the city level.</p> <p>The milestone prepared Kinsa for its most profound scale-up yet. When Covid-19 came to the U.S., the company was able to estimate its spread in real-time by tracking fever levels above what would normally be expected. Now Kinsa is working with health officials in five states and three cities to help contain and control the virus.</p> <p>“By the time the CDC [U.S. 
Centers for Disease Control] gets the data, it has been processed, deidentified, and people have entered the health system to see a doctor,” says Singh, who is Kinsa’s CEO as well as its founder. “There’s a huge delay between when someone contracts an illness and when they see a doctor. The current health care system only sees the latter; we see the former.”</p> <p>Today Kinsa finds itself playing a central role in America’s Covid-19 response. In addition to its local partnerships, the company has become a central information hub for the public, media, and researchers with its Healthweather tool, which maps unusual rates of fevers — among the most common symptoms of Covid-19 — to help visualize the prevalence of illness in communities.</p> <p>Singh says Kinsa’s data complement other methods of containing the virus like testing, contact tracing, and the use of face masks.</p> <p><strong>Better data for better responses</strong></p> <p>Singh’s first exposure to MIT came while he was attending the Harvard University Kennedy School of Government as a graduate student.</p> <p>“I remember I interacted with some MIT undergrads, we brainstormed some social-impact ideas,” Singh recalls. “A week later I got an email from them saying they’d prototyped what we were talking about. I was like, ‘You prototyped what we talked about in a week!?’ I was blown away, and it was an insight into how MIT is such a do-er campus. It was so entrepreneurial. I was like, ‘I want to do that.’”</p> <p>Soon Singh enrolled in the Harvard-MIT Program in Health Sciences and Technology, an interdisciplinary program where Singh earned his master’s and MBA degrees while working with leading research hospitals in the area. 
The program also set him on a course to improve the way we respond to infectious disease.</p> <p>Following his graduation, he joined the Clinton Health Access Initiative (CHAI), where he brokered deals between pharmaceutical companies and low-resource countries to lower the cost of medicines for HIV, malaria, and tuberculosis. Singh described CHAI as a dream job, but it opened his eyes to several shortcomings in the global health system.</p> <p>“The world tries to curb the spread of infectious illness with almost zero real-time information about when and where disease is spreading,” Singh says. “The question I posed to start Kinsa was ‘how do you stop the next outbreak before it becomes an epidemic if you don’t know where and when it’s starting and how fast it’s spreading?’”</p> <p>Kinsa was started in 2012 with the insight that better data were needed to control infectious diseases. In order to get that data, the company needed a new way of providing value to sick people and families.</p> <p>“The behavior in the home when someone gets sick is to grab the thermometer,” Singh says. “We piggy-backed off of that to create a communication channel to the sick, to help them get better faster.”</p> <p>Kinsa started by selling its thermometers and creating a sponsorship program for corporate donors to fund thermometer donations to Title I schools, which serve high numbers of economically disadvantaged students. Singh says 40 percent of families that received a Kinsa thermometer through that program did not previously have any thermometer in their house.</p> <p>The company says its program has been shown to help schools improve attendance, and has yielded years of real-time data on fever rates to help compare to official estimates and develop its models.</p> <p>“We had been forecasting flu incidence accurately several weeks out for years, and right around early 2020, we had a massive breakthrough,” Singh recalls. 
“We showed we could predict flu 12 to 20 weeks out — then March hit. We said, let’s try to remove the fever levels associated with cold and flu from our observed real-time signal. What’s left over is unusual fevers, and we saw hotspots across the country. We observed six years of data and there’d been hotspots, but nothing like we were seeing in early March.”</p> <p>The company quickly made its real-time data available to the public, and on March 14, Singh got on a call with the former New York State health commissioner, the former head of the U.S. Food and Drug Administration, and the man responsible for Taiwan’s successful Covid-19 response.</p> <p>“I said, ‘There’s hotspots everywhere,’” Singh recalls. “They’re in New York, around the Northeast, Texas, Michigan. They said, ‘This is interesting, but it doesn’t look credible because we’re not seeing case reports of Covid-19.’ Lo and behold, days and weeks later, we saw the Covid cases start building up.”</p> <p><strong>A tool against Covid-19</strong></p> <p>Singh says Kinsa’s data provide an unprecedented look into the way a disease is spreading through a community.</p> <p>“We can predict the entire incidence curve [of flu season] on a city-by-city basis,” Singh says. “The next best model is [about] three weeks out, at a multistate level. It’s not because we’re smarter than others; it’s because we have better data. We found a way to communicate with someone consistently when they’ve just fallen ill.”</p> <p>Kinsa has been working with health departments and research groups around the country to help them interpret the company’s data and react to early warnings of Covid-19’s spread. It’s also helping companies around the country as they begin bringing employees back to offices.</p> <p>Now Kinsa is working on expanding its international presence to help curb infectious diseases on multiple fronts around the world, just like it’s doing in the U.S. 
The company’s progress promises to help authorities monitor diseases long after Covid-19.</p> <p>“I started Kinsa to create a global, real-time outbreak monitoring and detection system, and now we have predictive power beyond that,” Singh says. “When you know where and when symptoms are starting and how fast they’re spreading, you can empower local individuals, families, communities, and governments.”</p>
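Singh’s description of the company’s Covid-19 signal (subtract the fever levels expected from cold and flu from the observed signal; what remains is unusual fever) can be sketched in a few lines of Python. The baseline values and threshold below are invented for illustration and are not Kinsa’s actual model:

```python
# Illustrative sketch: flag "unusual fever" by subtracting an expected
# seasonal baseline from observed fever rates. The baseline model and
# threshold are assumptions for illustration, not Kinsa's actual method.

def excess_fever(observed, baseline, threshold=0.5):
    """Return per-day excess fever (percentage points) above baseline,
    plus the indices of days flagged as anomalous."""
    excess = [obs - base for obs, base in zip(observed, baseline)]
    anomalies = [i for i, e in enumerate(excess) if e > threshold]
    return excess, anomalies

# Expected flu-season fever rate (%) vs. what the thermometers observed
baseline = [2.0, 2.1, 2.2, 2.3, 2.2, 2.1]
observed = [2.1, 2.2, 3.5, 4.0, 4.2, 3.9]  # spike beyond the flu baseline

excess, flagged = excess_fever(observed, baseline)
print(flagged)  # → [2, 3, 4, 5]: days 2-5 show unusual fever levels
```

In practice the baseline would itself be a forecast (per city, per season), and the threshold would be tuned against historical data rather than fixed.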
The startup Kinsa, founded by MIT alumnus Inder Singh MBA ’06, SM ’07, uses data generated by its thermometers to detect and track contagious illness earlier than methods that rely on hospital testing. Image: Courtesy of Kinsa

Rewriting the rules of machine-generated art An artificial intelligence tool lets users edit generative adversarial network models with simple copy-and-paste commands. Tue, 18 Aug 2020 15:00:00 -0400 Kim Martineau | MIT Quest for Intelligence <p>Horses don’t normally wear hats, and deep generative models, or GANs, don’t normally follow rules laid out by human programmers. But a new tool developed at MIT lets anyone go into a GAN and tell the model, like a coder, to put hats on the heads of the horses it draws.&nbsp;</p> <p>In&nbsp;<a href=”″>a new study</a>&nbsp;appearing at the&nbsp;<a href=”” target=”_blank”>European Conference on Computer Vision</a> this month, researchers show that the deep layers of neural networks can be edited, like so many lines of code, to generate surprising images no one has seen before.</p> <p>“GANs are incredible artists, but they’re confined to imitating the data they see,” says the study’s lead author,&nbsp;<a href=”” target=”_blank”>David Bau</a>, a PhD student at MIT. “If we can rewrite the rules of a GAN directly, the only limit is human imagination.”</p> <p>Generative adversarial networks, or GANs, pit two neural networks against each other to create hyper-realistic images and sounds. One neural network, the generator, learns to mimic the faces it sees in photos, or the words it hears spoken. A second network, the discriminator, compares the generator’s outputs to the original. The generator then iteratively builds on the discriminator’s feedback until its fabricated images and sounds are convincing enough to pass for real.</p> <p>GANs have captivated artificial intelligence researchers for their ability to create representations that are stunningly lifelike and, at times, deeply bizarre, from a receding cat that&nbsp;<a href=”″ target=”_self”>melts into a pile of fur</a>&nbsp;to a wedding dress standing in a church door as if&nbsp;<a href=”″ target=”_self”>abandoned by the bride</a>. 
Like most deep learning models, GANs depend on massive datasets to learn from. The more examples they see, the better they get at mimicking them.&nbsp;</p> <p>But the new study suggests that big datasets are not essential. If you understand how a model is wired, says Bau, you can edit the numerical weights in its layers to get the behavior you desire, even if no literal example exists. No dataset? No problem. Just create your own.</p> <p>“We’re like prisoners to our training data,” he says. “GANs only learn patterns that are already in our data. But here I can manipulate a condition in the model to create horses with hats. It’s like editing a genetic sequence to create something entirely new, like inserting the DNA of a firefly into a plant to make it glow in the dark.”</p> <p>Bau was a software engineer at Google, and had&nbsp;led the development&nbsp;of Google Hangouts and Google Image Search, when he decided to go back to school. The field of deep learning was exploding and he wanted to pursue foundational questions in computer science. Hoping to learn how to build transparent systems that would empower users, he joined the lab of MIT Professor&nbsp;<a href=”” target=”_blank”>Antonio Torralba</a>. There, he began probing deep nets and their millions of mathematical operations to understand how they represent the world.</p> <p>Bau showed that you could slice into a GAN, like layer cake, to isolate the artificial neurons that had learned to draw a particular feature, like a tree, and switch them off to make the tree disappear. With this insight, Bau helped create <a href=”;layer=layer4″>GANPaint</a>, a tool that lets users add and remove features like doors and clouds from a picture. In the process, he discovered that GANs have a stubborn streak: they wouldn’t let you draw doors in the sky.</p> <p>“It had some rule that seemed to say, ‘doors don’t go there,’” he says. “That’s fascinating, we thought. It’s like an ‘if’ statement in a program. 
To me, it was a clear signal that the network had some kind of inner logic.”</p> <p>Over several sleepless nights, Bau ran experiments and picked through the layers of his models for the equivalent of a conditional statement. Finally, it dawned on him. “The neural network has different memory banks that function as a set of general rules, relating one set of learned patterns to another,” he says. “I realized that if you could identify one line of memory, you could write a new memory into it.”&nbsp;</p> <p>In a <a href=”;” target=”_blank”>short version of his ECCV talk</a>, Bau demonstrates how to edit the model and rewrite memories using an intuitive interface he designed. He copies a tree from one image and pastes it into another, placing it, improbably, on a building tower. The model then churns out enough pictures of tree-sprouting towers to fill a family photo album. With a few more clicks, Bau transfers hats from human riders to their horses, and wipes away a reflection of light from a kitchen countertop.</p> <p>The researchers hypothesize that each layer of a deep net acts as an associative memory, formed after repeated exposure to similar examples. Fed enough pictures of doors and clouds, for example, the model learns that doors are entryways to buildings, and clouds float in the sky. The model effectively memorizes a set of rules for understanding the world.</p> <p>The effect is especially striking when GANs manipulate light. When GANPaint added windows to a room, for example, the model automatically added nearby reflections. It’s as if the model had an intuitive grasp of physics and how light should behave on object surfaces. “Even this relationship suggests that associations learned from data can be stored as lines of memory, and not only located but reversed,” says Torralba, the study’s senior author.&nbsp;</p> <p>GAN editing has its limitations. 
It’s not easy to identify all of the neurons corresponding to objects and animals the model renders, the researchers say. Some rules also appear edit-proof; some changes the researchers tried to make failed to execute.</p> <p>Still, the tool has immediate applications in computer graphics, where GANs are widely studied, and in training expert AI systems to recognize rare features and events through data augmentation. The tool also brings researchers closer to understanding how GANs learn visual concepts with minimal human guidance. If the models learn by imitating what they see, forming associations in the process, they may be a springboard for new kinds of machine learning applications.&nbsp;</p> <p>The study’s other authors are Steven Liu, Tongzhou Wang, and Jun-Yan Zhu.</p>
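The unit-ablation idea behind GANPaint (isolate the units that draw a feature, then switch them off) can be illustrated with a toy network. The tiny dense generator below stands in for a real convolutional GAN; its weights, and the choice of which unit to silence, are invented for illustration:

```python
# Toy sketch of "switching off" units in a generator, in the spirit of
# GAN dissection: zero the activations of chosen hidden units and observe
# how the output changes. Real GANs are convolutional and far larger;
# this tiny dense network and its weights are purely illustrative.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def generate(z, W1, W2, ablate=()):
    """Two-layer toy generator; `ablate` lists hidden units to zero out."""
    hidden = [max(0.0, h) for h in matvec(W1, z)]  # ReLU activations
    for i in ablate:
        hidden[i] = 0.0  # switch the unit off, as GANPaint does
    return matvec(W2, hidden)

W1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 2 inputs -> 3 hidden units
W2 = [[1.0, 2.0, 3.0]]                     # 3 hidden units -> 1 output
z = [0.5, 0.25]

full = generate(z, W1, W2)                   # all units active
no_unit2 = generate(z, W1, W2, ablate=(2,))  # the "tree" unit silenced
print(full, no_unit2)  # → [3.25] [1.0]
```

Ablating the hypothetical unit visibly changes the output, which is the signal Bau used to map units to the visual features they draw.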
A new GAN-editing tool developed at MIT allows users to copy features from one set of photos and paste them into another, creating an infinite array of pictures that riff on the new theme — in this case, horses with hats on their heads. Image: David Bau

Data systems that learn to be better Storage tool developed at MIT CSAIL adapts to what its datasets’ users want to search. Mon, 10 Aug 2020 16:05:00 -0400 Adam Conner-Simons | MIT CSAIL <p>Big data has gotten really, really big: By 2025, all the world’s data will add up to <a href=”″>an estimated 175 trillion gigabytes</a>. For a visual, if you stored that amount of data on DVDs, it would stack up tall enough to circle the Earth 222 times.&nbsp;</p> <p>One of the biggest challenges in computing is handling this onslaught of information while still being able to efficiently store and process it. A team from MIT’s <a href=””>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) believes that the answer rests with something called “instance-optimized systems.”&nbsp;&nbsp;</p> <p>Traditional storage and database systems are designed to work for a wide range of applications because of how long it can take to build them — months or, often, several years. As a result, for any given workload such systems provide performance that is good, but usually not the best. Even worse, they sometimes require administrators to painstakingly tune the system by hand to provide even reasonable performance.&nbsp;</p> <p>In contrast, the goal of instance-optimized systems is to build systems that optimize and partially re-organize themselves for the data they store and the workload they serve.&nbsp;</p> <p>“It’s like building a database system for every application from scratch, which is not economically feasible with traditional system designs,” says MIT Professor Tim Kraska.&nbsp;</p> <p>As a first step toward this vision, Kraska and colleagues developed Tsunami and Bao. <a href=””>Tsunami</a> uses machine learning to automatically re-organize a dataset’s storage layout based on the types of queries that its users make. Tests show that it can run queries up to 10 times faster than state-of-the-art systems. 
What’s more, its datasets can be organized via a series of “learned indexes” that are up to 100 times smaller than the indexes used in traditional systems.&nbsp;</p> <p>Kraska has been exploring the topic of learned indexes for several years, going back to his influential <a href=”″>work with colleagues at Google</a> in 2017.&nbsp;</p> <p>Harvard University Professor Stratos Idreos, who was not involved in the Tsunami project, says that a unique advantage of learned indexes is their small size, which, in addition to space savings, brings substantial performance improvements.</p> <p>“I think this line of work is a paradigm shift that’s going to impact system design long-term,” says Idreos. “I expect approaches based on models will be one of the core components at the heart of a new wave of adaptive systems.”</p> <p><a href=”″>Bao</a>, meanwhile, focuses on improving the efficiency of query optimization through machine learning. A query optimizer rewrites a high-level declarative query to a query plan, which can actually be executed over the data to compute the result of the query. However, there is often more than one query plan that can answer a given query; picking the wrong one can cause a query to take days, rather than seconds, to compute the answer.&nbsp;</p> <p>Traditional query optimizers take years to build, are very hard to maintain, and, most importantly, do not learn from their mistakes. Bao is the first learning-based approach to query optimization that has been fully integrated into the popular database management system PostgreSQL. 
Lead author Ryan Marcus, a postdoc in Kraska’s group, says that Bao produces query plans that run up to 50 percent faster than those created by the PostgreSQL optimizer, meaning that it could help to significantly reduce the cost of cloud services, like Amazon’s Redshift, that are based on PostgreSQL.</p> <p>By fusing the two systems together, Kraska hopes to build the first instance-optimized database system that can provide the best possible performance for each individual application without any manual tuning.&nbsp;</p> <p>The goal is to not only relieve developers from the daunting and laborious process of tuning database systems, but to also provide performance and cost benefits that are not possible with traditional systems.</p> <p>Traditionally, the systems we use to store data are limited to only a few storage options and, because of it, they cannot provide the best possible performance for a given application. What Tsunami can do is dynamically change the structure of the data storage based on the kinds of queries that it receives and create new ways to store data, which are not feasible with more traditional approaches.</p> <p>Johannes Gehrke, a managing director at Microsoft Research who also heads up machine learning efforts for Microsoft Teams, says that this work opens up many interesting applications, such as doing so-called “multidimensional queries” in main-memory data warehouses. Harvard’s Idreos also expects the project to spur further work on how to maintain the good performance of such systems when new data and new kinds of queries arrive.</p> <p>Bao is short for “bandit optimizer,” a play on words related to the so-called “multi-armed bandit” analogy where a gambler tries to maximize their winnings at multiple slot machines that have different rates of return. 
The multi-armed bandit problem arises in any situation that involves a tradeoff between exploring multiple options and exploiting a single one, from risk optimization to A/B testing.</p> <p>“Query optimizers have been around for years, but they often make mistakes, and usually they don’t learn from them,” says Kraska. “That’s where we feel that our system can make key breakthroughs, as it can quickly learn for the given data and workload what query plans to use and which ones to avoid.”</p> <p>Kraska says that in contrast to other learning-based approaches to query optimization, Bao learns much faster and can outperform open-source and commercial optimizers with as little as one hour of training time. In the future, his team aims to integrate Bao into cloud systems to improve resource utilization in environments where disk, RAM, and CPU time are scarce resources.</p> <p>“Our hope is that a system like this will enable much faster query times, and that people will be able to answer questions they hadn’t been able to answer before,” says Kraska.</p> <p>A related paper about Tsunami was co-written by Kraska, PhD students Jialin Ding and Vikram Nathan, and MIT Professor Mohammad Alizadeh. A paper about Bao was co-written by Kraska, Marcus, PhD students Parimarjan Negi and Hongzi Mao, visiting scientist Nesime Tatbul, and Alizadeh.</p> <p>The work was done as part of the Data System and AI Lab (DSAIL@CSAIL), which is sponsored by Intel, Google, Microsoft, and the U.S. National Science Foundation.&nbsp;</p>
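The multi-armed bandit framing behind Bao can be illustrated with a minimal epsilon-greedy sketch, in which each arm is a candidate query plan and the reward is (negative) execution latency. The plans and latencies here are invented, and Bao itself uses a learned model over plan trees rather than this simple rule:

```python
import random

# Minimal epsilon-greedy sketch of the multi-armed bandit framing behind
# Bao: each "arm" is a candidate query plan, and the optimizer learns
# from observed latencies which plan to favor. The latencies below are
# invented for illustration; Bao's actual approach is a learned model.

def choose_plan(avg_latency, counts, epsilon=0.1):
    if random.random() < epsilon or not all(counts):
        return random.randrange(len(avg_latency))  # explore a plan
    return min(range(len(avg_latency)), key=lambda i: avg_latency[i])  # exploit

def run_bandit(true_latency, steps=2000, seed=0):
    random.seed(seed)
    n = len(true_latency)
    avg, counts = [0.0] * n, [0] * n
    for _ in range(steps):
        i = choose_plan(avg, counts)
        observed = true_latency[i] + random.gauss(0, 0.1)  # noisy execution
        counts[i] += 1
        avg[i] += (observed - avg[i]) / counts[i]  # running-mean update
    return counts

# Plan 1 is fastest; the bandit should execute it most often.
counts = run_bandit([3.0, 1.0, 5.0])
print(counts.index(max(counts)))  # → 1
```

The epsilon parameter controls the explore/exploit tradeoff the article describes: too low and a mistaken early estimate never gets corrected, too high and slow plans keep being executed.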
One of the biggest challenges in computing is handling a staggering onslaught of information while still being able to efficiently store and process it.

Shrinking deep learning’s carbon footprint Through innovation in software and hardware, researchers move to reduce the financial and environmental costs of modern artificial intelligence. Fri, 07 Aug 2020 17:00:00 -0400 Kim Martineau | MIT Quest for Intelligence <p>In June, OpenAI unveiled the largest language model in the world, a text-generating tool called GPT-3 that can&nbsp;<a href=”″>write creative fiction</a>, translate&nbsp;<a href=”″>legalese into plain English</a>, and&nbsp;<a href=””>answer obscure trivia</a>&nbsp;questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.</p> <p>But it came at a hefty price: at least $4.6 million and&nbsp;<a href=””>355 years in computing time</a>, assuming the model&nbsp;was trained on a standard neural network chip, or GPU.&nbsp;The model’s colossal size — 1,000 times larger than&nbsp;<a href=””>a typical</a>&nbsp;language model — is the main factor in&nbsp;its high cost.</p> <p>“You have to throw a lot more computation at something to get a little improvement in performance,” says&nbsp;<a href=””>Neil Thompson</a>, an MIT researcher who has tracked deep learning’s unquenchable thirst for computing. “It’s unsustainable. We have to find more efficient ways to scale deep learning or develop other technologies.”</p> <p>Some of the excitement over AI’s recent progress has shifted to alarm. In a&nbsp;<a href=”″>study last year</a>, researchers at the University of Massachusetts at Amherst estimated that training&nbsp;a large deep-learning model&nbsp;produces 626,000 pounds of planet-warming carbon dioxide, equal to the lifetime emissions of five cars. 
As models grow bigger, their demand for computing is outpacing improvements in hardware efficiency.&nbsp;Chips specialized for neural-network processing, like GPUs (graphics processing units) and TPUs (tensor processing units), have offset the demand for more computing, but not by enough.&nbsp;</p> <p>“We need to rethink the entire stack — from software to hardware,” says&nbsp;<a href=””>Aude Oliva</a>, MIT director of the MIT-IBM Watson AI Lab and co-director of the MIT Quest for Intelligence.&nbsp;“Deep learning has made the recent AI revolution possible, but its growing cost in energy and carbon emissions is untenable.”</p> <p>Computational limits have dogged neural networks from their earliest incarnation —&nbsp;<a href=””>the perceptron</a>&nbsp;— in the 1950s.&nbsp;As computing power exploded, and the internet unleashed a tsunami of data, they evolved into powerful engines for pattern recognition and prediction. But each new milestone brought an explosion in cost, as data-hungry models demanded increased computation. GPT-3, for example, trained on half a trillion words and ballooned to 175 billion parameters&nbsp;— the mathematical operations, or weights, that tie the model together —&nbsp;making it 100 times bigger than its predecessor, itself just a year old.</p> <p>In&nbsp;<a href=””>work posted</a>&nbsp;on the pre-print server arXiv,&nbsp;Thompson and his colleagues show that the ability of deep learning models to surpass key benchmarks tracks their nearly exponential rise in computing power use. (Like others seeking to track AI’s carbon footprint, the team had to guess at many models’ energy consumption due to a lack of reporting requirements). At this rate, the researchers argue, deep nets will survive only if they, and the hardware they run on, become radically more efficient.</p> <p><strong>Toward leaner, greener algorithms</strong></p> <p>The human perceptual system is extremely efficient at using data. 
Researchers have borrowed this idea to make models for recognizing actions in video more compact.&nbsp;In a paper at the&nbsp;<a href=””>European Conference on Computer Vision</a> (ECCV) in August, researchers at the&nbsp;<a href=””>MIT-IBM Watson AI Lab</a>&nbsp;describe a method for unpacking a scene from a few glances, as humans do, by cherry-picking the most relevant data.</p> <p>Take a video clip of someone making a sandwich. Under the method outlined in the paper, a policy network strategically picks frames of the knife slicing through roast beef, and meat being stacked on a slice of bread, to represent at high resolution. Less-relevant frames are skipped over or represented at lower resolution. A second model then uses the abbreviated CliffsNotes version of the movie to label it “making a sandwich.” The approach leads to faster video classification at half the computational cost of the next-best model, the researchers say.</p> <p>“Humans don’t pay attention to every last detail — why should our models?” says the study’s senior author,&nbsp;<a href=””>Rogerio Feris</a>, research manager at the MIT-IBM Watson AI Lab. 
“We can use machine learning to adaptively select the right data, at the right level of detail, to make deep learning models more efficient.”</p> <p>In a complementary approach, researchers are using deep learning itself to design more economical models through an automated process known as neural architecture search.&nbsp;<a href=””>Song Han</a>, an assistant professor at MIT, has used automated search to design models with fewer weights for language understanding and for scene recognition, where quickly picking out looming obstacles is acutely important in driving applications.&nbsp;</p> <p>In&nbsp;<a href=””>a paper at ECCV</a>, Han and his colleagues propose a model architecture for three-dimensional scene&nbsp;recognition that can spot safety-critical details like road signs, pedestrians, and cyclists with relatively less computation. They used&nbsp;an evolutionary-search algorithm to evaluate 1,000 architectures before settling on a model they say is three times faster and uses eight times less computation than the next-best method.&nbsp;</p> <p>In&nbsp;<a href=””>another recent paper</a>, they use evolutionary search within an augmented design space to find the most efficient architectures for machine translation on a specific device, be it a GPU, smartphone, or tiny&nbsp;Raspberry Pi.&nbsp;Separating the search and training process leads to huge reductions in computation, they say.</p> <p>In a third approach, researchers are probing the essence of deep nets to see if it might be possible to&nbsp;train a small part of even hyper-efficient networks like those above.&nbsp;In their <a href=”″>lottery ticket hypothesis</a>, PhD student&nbsp;<a href=””>Jonathan Frankle</a>&nbsp;and MIT Professor&nbsp;<a href=””>Michael Carbin</a>&nbsp;proposed that within each model lies a tiny subnetwork that could have been trained in isolation with as few as one-tenth as many weights — what they call a “winning ticket.”&nbsp;</p> <p>They showed that an algorithm 
could retroactively&nbsp;find these winning subnetworks in&nbsp;small image-classification models. Now,&nbsp;<a href=”″>in a paper</a>&nbsp;at the International Conference on Machine Learning (ICML), they show that the algorithm finds winning tickets in large models, too; the models just need to be rewound to an early, critical point in training when the order of the training data no longer&nbsp;influences the training outcome.&nbsp;</p> <p>In less than two years, the lottery ticket idea has been cited&nbsp;<a href=”;hl=en”>more than 400 times</a>, including by Facebook researcher Ari Morcos, who has&nbsp;<a href=””>shown</a>&nbsp;that winning tickets can be transferred from one vision task to another, and that winning tickets exist in language and reinforcement learning models, too.&nbsp;</p> <p>“The standard explanation for why we need such large networks is that overparameterization aids the learning process,” says Morcos. “The lottery ticket hypothesis disproves that — it’s all about finding an appropriate starting point. The big downside, of course, is that, currently, finding these ‘winning’ starting points requires training the full overparameterized network anyway.”</p> <p>Frankle says he’s hopeful that an efficient way to find winning tickets will be found. In the meantime, recycling those winning tickets, as Morcos suggests, could lead to big savings.</p> <p><strong>Hardware designed for efficient deep net algorithms</strong></p> <p>As deep nets push classical computers to the limit, researchers are pursuing alternatives, from optical computers that transmit and store data with photons instead of electrons, to quantum computers, which have the potential to increase computing power exponentially by representing data in multiple states at once.</p> <p>Until a new paradigm emerges, researchers have focused on adapting the modern chip to the demands of deep learning. 
The trend began with&nbsp;the discovery that video-game graphical chips, or GPUs, could turbocharge deep-net training with their ability to perform massively parallelized matrix computations. GPUs are now one of the workhorses of modern AI, and have spawned new ideas for boosting deep net efficiency through specialized hardware.&nbsp;</p> <p>Much of this work hinges on finding ways to&nbsp;store and reuse data locally, across the chip’s processing cores,&nbsp;rather than waste time and energy shuttling data to and from&nbsp;a designated memory site. Processing data locally not only speeds up model training but improves inference, allowing deep learning applications to run more smoothly on smartphones and other mobile devices.</p> <p><a href=””>Vivienne Sze</a>, a professor at MIT, has literally written&nbsp;<a href=”″>the book</a>&nbsp;on efficient deep nets. In collaboration with book co-author Joel Emer, an MIT professor and researcher at NVIDIA, Sze has designed a chip that’s flexible enough to process the widely-varying shapes of both large and small deep learning models. Called&nbsp;<a href=”″>Eyeriss 2</a>, the chip uses 10 times less energy than a mobile GPU.</p> <p>Its versatility lies in its on-chip network, called a hierarchical mesh, that adaptively reuses data and adjusts to the bandwidth requirements of different deep learning models. After reading from memory, it reuses the data across as many processing elements as possible to minimize data transportation costs and maintain high throughput.&nbsp;</p> <p>“The goal is to translate small and sparse networks into energy savings and fast inference,” says Sze. “But the hardware should be flexible enough to also efficiently support large and dense deep neural networks.”</p> <p>Other hardware innovators are focused on reproducing the brain’s energy efficiency. 
Former Go world champion Lee Sedol may have lost his title to a computer, but his performance&nbsp;<a href=””>was fueled</a>&nbsp;by a mere 20 watts of power. AlphaGo, by contrast, burned an estimated megawatt of power, roughly 50,000 times more.</p> <p>Inspired by the brain’s frugality, researchers are experimenting with replacing the binary, on-off switch of classical transistors with analog devices that mimic the way that synapses in the brain grow stronger and weaker during learning and forgetting.</p> <p>An electrochemical device, developed at MIT and recently&nbsp;<a href=”″>published in <em>Nature Communications</em></a>, is modeled after the way resistance between two neurons grows or subsides as calcium, magnesium or potassium ions flow across the synaptic membrane dividing them.&nbsp;The device uses the flow of protons — the smallest and fastest ion in solid state — into and out of a crystalline lattice of tungsten trioxide to tune its resistance along a continuum, in an analog fashion.</p> <p>“Even though the device is not yet optimized, it gets to the order of energy consumption per unit area per unit change in conductance that’s close to that in the brain,” says&nbsp;the study’s senior author, <a href=””>Bilge Yildiz</a>, a professor at MIT.</p> <p>Energy-efficient algorithms and hardware can shrink AI’s environmental impact. But there are other reasons to innovate, says Sze, listing them off: Efficiency will allow computing to move from data centers to edge devices like smartphones, making AI accessible to more people around the world; shifting computation from the cloud to personal devices reduces the flow, and potential leakage, of sensitive data; and processing data on the edge eliminates transmission costs, leading to faster inference with a shorter reaction time, which is key for interactive driving and augmented/virtual reality applications.</p> <p>“For all of these reasons, we need to embrace efficient AI,” she says.</p>
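The prune-and-rewind recipe behind the lottery ticket results described earlier can be sketched in a few lines of Python. This is a toy illustration, not the researchers' code: a linear least-squares "network" stands in for a deep model, and the data, the 25 percent pruning level, and the rewind step are all invented for the example.

```python
import numpy as np

# Toy lottery-ticket sketch: train, prune small weights, rewind survivors
# to early-training values, then retrain the sparse subnetwork.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:4] = [3.0, -2.0, 1.5, 4.0]   # only four weights actually matter
y = X @ true_w

def train(w, mask, steps=300, lr=0.1):
    """Gradient descent on least squares, holding pruned weights at zero."""
    rewind = None
    for t in range(steps):
        grad = X.T @ (X @ (w * mask) - y) / len(y)
        w = (w - lr * grad) * mask
        if t == 5:                    # the "early, critical point" to rewind to
            rewind = w.copy()
    return w, rewind

w0 = rng.normal(size=20)
full_mask = np.ones(20)

# 1) Train the full, over-parameterized model once.
w_full, w_rewind = train(w0, full_mask)

# 2) Prune: keep only the 25 percent largest-magnitude trained weights.
mask = (np.abs(w_full) >= np.quantile(np.abs(w_full), 0.75)).astype(float)

# 3) Rewind the surviving weights to early training and retrain the sparse net.
w_sparse, _ = train(w_rewind, mask)

loss = float(np.mean((X @ (w_sparse * mask) - y) ** 2))
print(f"winning ticket keeps {int(mask.sum())}/20 weights, loss = {loss:.6f}")
```

Real lottery ticket experiments repeat the prune-and-rewind cycle over several rounds on deep networks; the sketch compresses that to a single round.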
Deep learning has driven much of the recent progress in artificial intelligence, but as demand for computation and energy to train ever-larger models increases, many are raising concerns about the financial and environmental costs. To address the problem, researchers at MIT and the MIT-IBM Watson AI Lab are experimenting with ways to make software and hardware more energy efficient, and in some cases, more like the human brain. Image: Niki Hinkle/MIT Spectrum

Looking into the black box Recent advances give theoretical insight into why deep learning networks are successful. Mon, 27 Jul 2020 16:45:01 -0400 Sabbi Lall | McGovern Institute for Brain Research <p>Deep learning systems are revolutionizing technology around us, from voice recognition that pairs you with your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success involves trial and error when it comes to the deep learning networks themselves. A group of MIT researchers <a href=””>recently reviewed</a> their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.</p> <p>“Deep learning was in some ways an accidental discovery,” explains Tommy Poggio, investigator at the McGovern Institute for Brain Research, director of the Center for Brains, Minds, and Machines (CBMM), and the Eugene McDermott Professor in Brain and Cognitive Sciences. “We still do not understand why it works. A theoretical framework is taking form, and I believe that we are now close to a satisfactory theory. It is time to stand back and review recent insights.”</p> <p><strong>Climbing data mountains</strong></p> <p>Our current era is marked by a superabundance of data — data from inexpensive sensors of all types, text, the internet, and large amounts of genomic data being generated in the life sciences. Computers nowadays ingest these multidimensional datasets, creating a set of problems dubbed the “curse of dimensionality” by the late mathematician Richard Bellman.</p> <p>One of these problems is that representing a smooth, high-dimensional function requires an astronomically large number of parameters. We know that deep neural networks are particularly good at learning how to represent, or approximate, such complex data, but why? 
Understanding why could potentially help advance deep learning applications.</p> <p>“Deep learning is like electricity after Volta discovered the battery, but before Maxwell,” explains Poggio, who is the founding scientific advisor of The Core, MIT Quest for Intelligence, and an investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. “Useful applications were certainly possible after Volta, but it was Maxwell’s theory of electromagnetism, this deeper understanding that then opened the way to the radio, the TV, the radar, the transistor, the computers, and the internet.”</p> <p>The theoretical treatment by Poggio, Andrzej Banburski, and Qianli Liao points to why deep learning might overcome data problems such as “the curse of dimensionality.” Their approach starts with the observation that many natural structures are hierarchical. To model the growth and development of a tree doesn’t require that we specify the location of every twig. Instead, a model can use local rules to drive branching hierarchically. The primate visual system appears to do something similar when processing complex data. When we look at natural images — including trees, cats, and faces — the brain successively integrates local image patches, then small collections of patches, and then collections of collections of patches.&nbsp;</p> <p>“The physical world is compositional — in other words, composed of many local physical interactions,” explains Qianli Liao, an author of the study, and a graduate student in the Department of Electrical Engineering and Computer Science and a member of the CBMM. “This goes beyond images. Language and our thoughts are compositional, and even our nervous system is compositional in terms of how neurons connect with each other. 
Our review explains theoretically why deep networks are so good at representing this complexity.”</p> <p>The intuition is that a hierarchical neural network should be better at approximating a compositional function than a single “layer” of neurons, even if the total number of neurons is the same. The technical part of their work identifies what “better at approximating” means and proves that the intuition is correct.</p> <p><strong>Generalization puzzle</strong></p> <p>There is a second puzzle about what is sometimes called the unreasonable effectiveness of deep networks. Deep network models often have far more parameters than data to fit them, despite the mountains of data we produce these days. This situation ought to lead to what is called “overfitting,” where your current data fit the model well, but any new data fit the model terribly. This is dubbed poor generalization in conventional models. The conventional solution is to constrain some aspect of the fitting procedure. However, deep networks do not seem to require this constraint. Poggio and his colleagues prove that, in many cases, the process of training a deep network implicitly “regularizes” the solution, providing constraints.</p> <p>The work has a number of implications going forward. Though deep learning is actively being applied in the world, this has so far occurred without a comprehensive underlying theory.<strong> </strong>A theory of deep learning that explains why and how deep networks work, and what their limitations are, will likely allow development of even much more powerful learning approaches.</p> <p>“In the long term, the ability to develop and build better intelligent machines will be essential to any technology-based economy,” explains Poggio. “After all, even in its current — still highly imperfect — state, deep learning is impacting, or about to impact, just about every aspect of our society and life.”</p>
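The hierarchical-versus-flat intuition can be made concrete with a toy compositional function. In this sketch (our example, not the paper's formal construction), a function of eight variables is built by reusing a single local two-input rule at every branch of a binary tree, so the whole function is described by one small rule no matter how many inputs it takes:

```python
# A compositional function: one local interaction, reused hierarchically.

def local_rule(a, b):
    # one "local physical interaction," reused at every branch
    return (a + b) / 2 + a * b

def hierarchical(xs):
    """Combine inputs pairwise, layer by layer, like a deep network."""
    while len(xs) > 1:
        xs = [local_rule(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
    return xs[0]

x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(hierarchical(x))
```

A flat, single-layer approximation gets no benefit from this structure; in the worst case its parameter count grows exponentially with the number of inputs, which is the curse of dimensionality the review addresses.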
Neural network

Commentary: America must invest in its ability to innovate Presidents of MIT and Indiana University urge America’s leaders to support bipartisan innovation bill. Fri, 24 Jul 2020 09:52:12 -0400 Zach Winn | MIT News Office <p>In July of 1945, in an America just beginning to establish a postwar identity, former MIT vice president Vannevar Bush set forth a vision that guided the country to decades of scientific dominance and economic prosperity. Bush’s report to the president of the United States, <a href=””>“Science: The Endless Frontier,”</a> called on the government to support basic research in university labs. Its ideas, including the creation of the National Science Foundation (NSF), are credited with helping to make U.S. scientific and technological innovation the envy of the world.</p> <p>Today, America’s lead in science and technology is being challenged as never before, write MIT President L. Rafael Reif and Indiana University President Michael A. McRobbie in <a href=””>an op-ed</a> published today by <em>The Chicago Tribune</em>. They describe a “triple challenge” of bolder foreign competitors, faster technological change, and a merciless race to get from lab to market.</p> <p>The government’s decision to adopt Bush’s ideas was bold and controversial at the time, and similarly bold action is needed now, they write.</p> <p>“The U.S. has the fundamental building blocks for success, including many of the world’s top research universities <a href=”″>that are at the forefront of the fight against COVID-19</a>,” reads the op-ed. “But without a major, sustained funding commitment, a focus on key technologies and a faster system for transforming discoveries into new businesses, products and quality jobs, in today’s arena, America will not prevail.”</p> <p>McRobbie and Reif believe <a href=”” target=”_blank”>a bipartisan bill</a> recently introduced in both chambers of Congress can help America’s innovation ecosystem meet the challenges of the day. 
Named the “Endless Frontier Act,” the bill would support research focused on advancing key technologies like artificial intelligence and quantum computing. It does not seek to alter or replace the NSF, but to “create new strength in parallel,” they write.&nbsp;</p> <p>The bill would also create scholarships, fellowships, and other forms of assistance to help build an American workforce ready to develop and deploy the latest technologies. And, it would facilitate experiments to help commercialize new ideas more quickly.</p> <p>“Today’s leaders have the opportunity to display the far-sighted vision their predecessors showed after World War II — to expand and shape our institutions, and to make the investments to adapt to a changing world,” Reif and McRobbie write.</p> <p>Both university presidents acknowledge that measures such as the Endless Frontier Act require audacious choices. But if leaders take the right steps now, they write, those choices will seem, in retrospect, obvious and wise.</p> <p>“Now as then, our national prosperity hinges on the next generation of technical triumphs,” Reif and McRobbie write. “Now as then, that success is not inevitable, and it will not come by chance. But with focused funding and imaginative policy, we believe it remains in reach.”</p>

Tackling the misinformation epidemic with “In Event of Moon Disaster” New website from the MIT Center for Advanced Virtuality rewrites an important moment in history to educate the public on the dangers of deepfakes. Mon, 20 Jul 2020 05:00:00 -0400 MIT Open Learning <p>Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “<a href=””>In Event of Moon Disaster</a>.”</p> <p>This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be.&nbsp;</p> <p>“Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it’s become a crucial issue of our time,” says D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality, part of MIT Open Learning. 
“With this project — and a course curriculum on misinformation being built around it — our powerfully talented XR Creative Director Francesca Panetta is pushing forward one of the center’s broad aims: using AI and technologies of virtuality to support creative expression and truth.”</p> <p>Alongside the film, <a href=”” target=”_blank”></a> features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources.&nbsp;</p> <p>“This alternative history shows how new technologies can obfuscate the truth around us, encouraging our audience to think carefully about the media they encounter daily,” says Panetta.</p> <p>Also part of the launch is a new documentary, “To Make a Deepfake,” a 30-minute film by <em>Scientific American</em>, that uses “In Event of Moon Disaster” as a jumping-off point to explain the technology behind AI-generated media. The documentary features prominent scholars and thinkers on the state of deepfakes, on the stakes for the spread of misinformation and the twisting of our digital reality, and on the future of truth.</p> <p>The project is supported by the MIT Open Documentary Lab and the Mozilla Foundation, which awarded “In Event of Moon Disaster” a Creative Media Award last year. These awards are part of Mozilla’s mission to realize more trustworthy AI in consumer technology. <a href=”;userId=5427979&amp;signature=3bd187fb1c19bc28″>The latest cohort of awardees</a> uses art and advocacy to examine AI’s effect on media and truth.</p> <p>Says J. 
Bob Alotta, Mozilla’s vice president of global programs: “AI plays a central role in consumer technology today — it curates our news, it recommends who we date, and it targets us with ads. Such a powerful technology should be demonstrably worthy of trust, but often it is not. Mozilla’s Creative Media Awards draw attention to this, and also advocate for more privacy, transparency, and human well-being in AI.”&nbsp;</p> <p>“In Event of Moon Disaster” <a href=”″ target=”_self”>previewed last fall</a> as a physical art installation at the International Documentary Film Festival Amsterdam, where it won the Special Jury Prize for Digital Storytelling; it was selected for the 2020 Tribeca Film Festival and Cannes XR. The new website is the project’s global digital launch, making the film and associated materials available for free to all audiences.</p> <p>The past few months have seen the world move almost entirely online: schools, talk shows, museums, election campaigns, doctor’s appointments — all have made a rapid transition to virtual. When every interaction we have with the world is seen through a digital filter, it becomes more important than ever to learn how to distinguish between authentic and manipulated media.&nbsp;</p> <p>“It’s our hope that this project will encourage the public to understand that manipulated media plays a significant role in our media landscape,” says co-director Burgund, “and that, with further understanding and diligence, we can all reduce the likelihood of being unduly influenced by it.”</p>
Using sophisticated AI and machine learning technologies, the “In Event of Moon Disaster” team merged Nixon’s face with the movements of an actor reading a speech the former president never actually delivered. Image: MIT Center for Advanced Virtuality

Faculty receive funding to develop artificial intelligence techniques to combat Covid-19 Digital Transformation Institute awards $5.4 million to top researchers to steer how society responds to the pandemic. Fri, 17 Jul 2020 15:30:01 -0400 School of Engineering | MIT Schwarzman College of Computing <p>Artificial intelligence has the power to help put an end to the Covid-19 pandemic. Not only can techniques of machine learning and natural language processing be used to track and report Covid-19 infection rates, but other AI techniques can also be used to make smarter decisions about everything from when states should reopen to how vaccines are designed. Now, MIT researchers working on seven groundbreaking projects on Covid-19 will be funded to more rapidly develop and apply novel AI techniques to improve medical response and slow the pandemic spread.</p> <p>Earlier this year, the <a href=”” target=”_blank”>Digital Transformation Institute</a> (DTI) formed, with the goal of attracting the world’s leading scientists to join in a coordinated and innovative effort to advance the digital transformation of businesses, governments, and society. The consortium is dedicated to accelerating advances in research combining machine learning, artificial intelligence, internet of things, ethics, and public policy to enhance societal outcomes. MIT, under the auspices of the School of Engineering, joined the DTI consortium, along with Microsoft Corporation, the University of Illinois at Urbana-Champaign, the University of California at Berkeley, Princeton University, the University of Chicago, Carnegie Mellon University, and, most recently, Stanford University.</p><p>The initial call for project proposals aimed to embrace the challenge of abating the spread of Covid-19 and advance the knowledge, science, and technologies for mitigating the impact of pandemics using AI. 
Out of a total of 200 research proposals, 26 projects were selected and awarded $5.4 million to continue AI research to mitigate the impact of Covid-19 in the areas of medicine, urban planning, and public policy.</p> <p>The <a href=””>first round of grant recipients was recently announced</a>, and among them are five projects led by MIT researchers from across the Institute: Saurabh Amin, associate professor of civil and environmental engineering; Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management; Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of the MIT Institute for Data, Systems, and Society; David Gifford, professor of biological engineering and of electrical engineering and computer science; and Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, head of the Department of Electrical Engineering and Computer Science, and deputy dean of academics for MIT Schwarzman College of Computing.</p> <p>“We are proud to be a part of this consortium, and to collaborate with peers across higher education, industry, and health care to collectively combat the current pandemic, and to mitigate risk associated with future pandemics,” says Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “We are so honored to have the opportunity to accelerate critical Covid-19 research through resources and expertise provided by the DTI.”</p> <p>Additionally, three MIT researchers will collaborate with principal investigators from other institutions on projects blending health and machine learning. 
Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science, join Ziv Bar-Joseph from Carnegie Mellon University for a project using machine learning to seek treatment for Covid-19. Aleksander Mądry, professor of computer science in the Department of Electrical Engineering and Computer Science, joins Sendhil Mullainathan of the University of Chicago for a project using machine learning to support emergency triage of pulmonary collapse due to Covid-19 on the basis of X-rays.</p> <p>Bertsimas’s project develops automated, interpretable, and scalable decision-making systems based on machine learning and artificial intelligence to support clinical practices and public policies as they respond to the Covid-19 pandemic. When it comes to reopening the economy while containing the spread of the pandemic, Ozdaglar’s research provides quantitative analyses of targeted interventions for different groups that will guide policies calibrated to different risk levels and interaction patterns. Amin is investigating the design of actionable information and effective intervention strategies to support safe mobilization of economic activity and reopening of mobility services in urban systems. Dahleh’s research innovatively uses machine learning to determine how to safeguard schools and universities against the outbreak. 
Gifford was awarded funding for his project that uses machine learning to develop more informed vaccine designs with improved population coverage, and to develop models of Covid-19 disease severity using individual genotypes.</p> <p>“The enthusiastic support of the distinguished MIT research community is making a huge contribution to the rapid&nbsp;start and significant progress of the Digital Transformation Institute,” says Thomas Siebel, chair and CEO of “It is a privilege to be working with such an accomplished team.”</p> <p>The following projects are the MIT recipients of the inaugural DTI Awards:&nbsp;</p> <p>”Pandemic Resilient Urban Mobility: Learning Spatiotemporal Models for Testing, Contact Tracing, and Reopening Decisions” — Saurabh Amin, associate professor of civil and environmental engineering; and Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science</p> <p>”Effective Cocktail Treatments for SARS-CoV-2 Based on Modeling Lung Single Cell Response Data” — Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science (Principal investigator: Ziv Bar-Joseph of Carnegie Mellon University)</p> <p>”Toward Analytics-Based Clinical and Policy Decision Support to Respond to the Covid-19 Pandemic” — Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management and associate dean for business analytics; and Alexandre Jacquillat, assistant professor of operations research and statistics</p> <p>”Reinforcement Learning to Safeguard Schools and Universities Against the Covid-19 Outbreak” — Munther Dahleh, the William A. 
Coolidge Professor of Electrical Engineering and Computer Science and director of MIT Institute for Data, Systems, and Society; and Peko Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of engineering</p> <p>”Machine Learning-Based Vaccine Design and HLA Based Risk Prediction for Viral Infections” — David Gifford, professor of biological engineering and of electrical engineering and computer science</p> <p>”Machine Learning Support for Emergency Triage of Pulmonary Collapse in Covid-19″ — Aleksander Mądry,<em> </em>professor of computer science in the Department of Electrical Engineering and Computer Science (Principal investigator: Sendhil Mullainathan of the University of Chicago)</p> <p>”Targeted Interventions in Networked and Multi-Risk SIR Models: How to Unlock the Economy During a Pandemic” — Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, department head of electrical engineering and computer science, and deputy dean of academics for MIT Schwarzman College of Computing; and Daron Acemoglu, Institute Professor</p>
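As a rough illustration of the networked, multi-risk SIR framing in the last project above, the following sketch simulates two interacting risk groups and compares an untargeted transmission cut with one aimed only at the high-contact group. All of the numbers are invented for the example and have no connection to the funded research.

```python
import numpy as np

# Two-group SIR with a targeted intervention (illustrative constants only).

def simulate(beta, gamma=0.1, days=200):
    """Euler-step a two-group SIR; beta[i][j] = transmission from group j to i."""
    S = np.array([0.6, 0.4])          # population shares, initially susceptible
    I = np.array([1e-3, 1e-3])
    R = np.zeros(2)
    for _ in range(days):
        new_inf = S * (beta @ I)      # per-group new infections this day
        recov = gamma * I
        S, I, R = S - new_inf, I + new_inf - recov, R + recov
    return float(R.sum())             # final epidemic size

beta = np.array([[0.30, 0.10],        # group 0: high-contact
                 [0.10, 0.05]])       # group 1: low-contact

uniform = beta * 0.7                  # untargeted: cut all transmission by 30 percent
targeted = beta.copy()
targeted[0, :] *= 0.5                 # targeted: halve the high-contact group's exposure

print("no intervention:", simulate(beta))
print("uniform cut:    ", simulate(uniform))
print("targeted cut:   ", simulate(targeted))
```

The point of the comparison is that interventions calibrated to a group's contact pattern can shrink the epidemic more efficiently than a uniform cut, which is the kind of question the quantitative analyses above are designed to answer rigorously.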
Out of a total of 200 research proposals, 26 projects were selected and awarded $5.4 million to continue AI research to mitigate the impact of Covid-19 in the areas of medicine, urban planning, and public policy.

Letting robots manipulate cables Robotic gripper with soft sensitive fingers developed at MIT can handle cables with unprecedented dexterity. Mon, 13 Jul 2020 07:00:00 -0400 Rachel Gordon | MIT CSAIL <p>For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.</p> <p>Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and from the MIT Department of Mechanical Engineering pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s <a href=”” target=”_blank”>new system</a> uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.</p> <p>One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.&nbsp;</p> <p>The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based <a href=”″ target=”_self”>“GelSight” sensors</a>, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.</p> <p>The team’s second step was to create a perception-and-control framework to allow cable manipulation. 
For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.</p> <p>When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.</p> <p>As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.&nbsp;</p> <p>“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”&nbsp;</p><p><strong>String me along</strong>&nbsp;</p><p>Cable following is challenging for two reasons. First, it requires controlling the “grasp force” (to enable smooth sliding), and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).&nbsp;&nbsp;</p><p>This information is hard to capture from conventional vision systems during continuous manipulation, because it’s usually occluded, expensive to interpret, and sometimes inaccurate.&nbsp;</p> <p>What’s more, this information can’t be directly observed with just vision sensors, hence the team’s use of tactile<em> </em>sensors. 
The gripper’s joints are also flexible — protecting them from potential impact.&nbsp;</p> <p>The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and to cables moving at different speeds.&nbsp;</p> <p>When comparing different controllers applied to the team’s gripper, their control policy retained the cable in hand over longer distances than three alternatives. For example, the “open-loop” controller followed only 36 percent of the total length; it easily lost the cable when it curved and needed many regrasps to finish the task.&nbsp;</p> <p><strong>Looking ahead&nbsp;</strong></p> <p>The team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.&nbsp;</p> <p>In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.</p> <p>Yu She wrote the paper alongside MIT PhD students Shaoxiong Wang, Siyuan Dong, and Neha Sunil; Alberto Rodriguez,&nbsp;MIT associate professor of mechanical engineering; and Edward Adelson, the <span class=”person__info__def”>John and Dorothy Wilson Professor in the MIT Department of Brain and Cognitive Sciences</span>.&nbsp;</p> <p>This work was supported by the Amazon Research Awards, the Toyota Research Institute, and the Office of Naval Research.</p>
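The two parallel controllers described above can be sketched schematically. This toy loop is our illustration with made-up constants, not CSAIL's code: the "tactile" readings are simulated, one controller drives the measured friction force toward a sliding target, and the other steers the gripper pose to re-center the cable between the fingers.

```python
# Schematic dual-controller loop for cable following (illustrative only).

def follow_cable(steps=60, target_force=1.0, kf=0.3, kp=0.3):
    grip = 2.0          # grip command; toy plant model: friction = 0.9 * grip
    cable_pos = 0.8     # cable's lateral position
    pose = 0.0          # gripper's lateral position
    friction = offset = 0.0
    for _ in range(steps):
        friction = 0.9 * grip        # simulated tactile friction estimate
        offset = cable_pos - pose    # simulated tactile cable-offset estimate
        grip += kf * (target_force - friction)   # controller 1: grip strength
        pose += kp * offset                      # controller 2: gripper pose
    return friction, offset

friction, offset = follow_cable()
print(f"friction settles near {friction:.3f}; cable offset near {offset:.5f}")
```

In the real system both signals come from the GelSight sensors rather than a toy plant model, but the division of labor is the same: grip force is regulated for smooth sliding while pose keeps the cable from escaping the fingers.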
The system uses a pair of soft robotic grippers with high-resolution tactile sensors to successfully manipulate freely moving cables. Photo courtesy of MIT CSAIL.

Exploring interactions of light and matter Juejun Hu pushes the frontiers of optoelectronics for biological imaging, communications, and consumer electronics. Tue, 30 Jun 2020 23:59:59 -0400 David L. Chandler | MIT News Office <p>Growing up in a small town in Fujian province in southern China, Juejun Hu was exposed to engineering from an early age. His father, trained as a mechanical engineer, spent his career working first in that field, then in electrical engineering, and then civil engineering.</p><p>“He gave me early exposure to the field. He brought me books and told me stories of interesting scientists and scientific activities,” Hu recalls. So when it came time to go to college — in China students have to choose their major before enrolling — he picked materials science, figuring that field straddled his interests in science and engineering. He pursued that major at Tsinghua University in Beijing.</p><p>He never regretted that decision. “Indeed, it’s the way to go,” he says. “It was a serendipitous choice.” He continued on to a doctorate in materials science at MIT, and then spent four and a half years as an assistant professor at the University of Delaware before joining the MIT faculty. Last year, Hu earned tenure as an associate professor in MIT’s Department of Materials Science and Engineering.</p><p>In his work at the Institute, he has focused on optical and photonic devices, whose applications include improving high-speed communications, observing the behavior of molecules, designing better medical imaging systems, and developing innovations in consumer electronics such as display screens and sensors.</p><p>“I got fascinated with light,” he says, recalling how he began working in this field. “It has such a direct impact on our lives.”</p><p>Hu is now developing devices to transmit information at very high rates, for data centers or high-performance computers. 
This includes work on devices called optical diodes or optical isolators, which allow light to pass through only in one direction, and systems for coupling light signals into and out of photonic chips.</p><p>Lately, Hu has been focusing on applying machine-learning methods to improve the performance of optical systems. For example, he has developed an algorithm that improves the sensitivity of a spectrometer, a device for analyzing the chemical composition of materials based on how they emit or absorb different frequencies of light. The new approach made it possible to shrink a device that ordinarily requires bulky and expensive equipment down to the scale of a computer chip, by improving its ability to overcome random noise and provide a clean signal.</p><p>The miniaturized spectrometer makes it possible to analyze the chemical composition of individual molecules with something “small and rugged, to replace devices that are large, delicate, and expensive,” he says.</p><p>Much of his work currently involves the use of metamaterials, which don’t occur in nature and are synthesized usually as a series of ultrathin layers, so thin that they interact with wavelengths of light in novel ways. These could lead to components for biomedical imaging, security surveillance, and sensors on consumer electronics, Hu says. Another project he’s been working on involved developing a kind of optical zoom lens based on metamaterials, which uses no moving parts.</p><p>Hu is also pursuing ways to make photonic and photovoltaic systems that are flexible and stretchable rather than rigid, and to make them lighter and more compact. This could &nbsp;allow for installations in places that would otherwise not be practical. 
“I’m always looking for new designs to start a new paradigm in optics, [to produce] something that’s smaller, faster, better, and lower cost,” he says.</p><p>Hu says the focus of his research these days is mostly on amorphous materials — whose atoms are randomly arranged as opposed to the orderly lattices of crystal structures — because crystalline materials have been so well-studied and understood. When it comes to amorphous materials, though, “our knowledge is amorphous,” he says. “There are lots of new discoveries in the field.”</p><p>Hu’s wife, Di Chen, whom he met when they were both in China, works in the financial industry. They have twin daughters, Selena and Eos, who are 1 year old, and a son Helius, age 3. Whatever free time he has, Hu says, he likes to spend doing things with his kids.</p><p>Recalling why he was drawn to MIT, he says, “I like this very strong engineering culture.” He especially likes MIT’s strong system of support for bringing new advances out of the lab and into real-world application. “This is what I find really useful.” When new ideas come out of the lab, “I like to see them find real utility,” he adds.</p>
MIT professor Juejun Hu specializes in optical and photonic devices, whose applications include improving high-speed communications, observing the behavior of molecules, and developing innovations in consumer electronics. Image: Denis Paiste

The MIT Press and UC Berkeley launch Rapid Reviews: COVID-19 The new open access, rapid-review overlay journal aims to combat misinformation in Covid-19 research. Mon, 29 Jun 2020 15:35:01 -0400 MIT Press <p><a href=”” target=”_blank”>The MIT Press</a> has announced the launch of <a href=”” target=”_blank”><em>Rapid Reviews: COVID-19</em></a> (<em>RR:C19</em>), an open access, rapid-review overlay journal that will accelerate peer review of Covid-19-related research and deliver real-time, verified scientific information that policymakers and health leaders can use.</p> <p>Scientists and researchers are working overtime to understand the SARS-CoV-2 virus and are producing an unprecedented amount of preprint scholarship that is publicly available online but has not been vetted yet by peer review for accuracy. Traditional peer review can take four or more weeks to complete, but <em>RR:C19’s </em>editorial team, led by Editor-in-Chief Stefano M. Bertozzi, professor of health policy and management and dean emeritus of the <a href=””>School of Public Health</a> at the University of California at Berkeley, will produce expert reviews in a matter of days.</p> <p>Using artificial intelligence tools, a global team will identify promising scholarship in preprint repositories, commission expert peer reviews, and publish the results on an open access platform in a completely transparent process. The journal will strive for disciplinary and geographic breadth, sourcing manuscripts from all regions and across a wide variety of fields, including medicine; public health; the physical, biological, and chemical sciences; the social sciences; and the humanities. <em>RR:C19 </em>will also provide a new publishing option for revised papers that are positively reviewed.</p> <p>Amy Brand, director of the MIT Press sees the no-cost open access model as a way to increase the impact of global research and disseminate high-quality scholarship. 
“Offering a peer-reviewed model on top of preprints will bring a level of diligence that clinicians, researchers, and others worldwide rely on to make sound judgments about the current crisis and its amelioration,” says Brand. “The project also aims to provide a proof-of-concept for new models of peer-review and rapid publishing for broader applications.”</p> <p>Made possible by a $350,000 grant from the Patrick J. McGovern Foundation and hosted on <a href=””>PubPub</a>, an open-source publishing platform from the Knowledge Futures Group for collaboratively editing and publishing journals, monographs, and other open access scholarly content, <em>RR:C19</em> will limit the spread of misinformation about Covid-19, according to Bertozzi.</p> <p>“There is an urgent need to validate — or debunk — the rapidly growing volume of Covid-19-related manuscripts on preprint servers,” explains Bertozzi. “I’m excited to be working with the MIT Press, the Patrick J. McGovern Foundation, and the Knowledge Futures Group to create a novel publishing model that has the potential to more efficiently translate important scientific results into action. We are also working with <a href=””>COVIDScholar</a>, an initiative of UC Berkeley and Lawrence Berkeley National Lab, to create unique AI/machine learning tools to support the review of hundreds of preprints per week.”</p> <p>“This project signals a breakthrough in academic publishing, bringing together urgency and scientific rigor so the world’s researchers can rapidly disseminate new discoveries that we can trust,” says Vilas Dhar, trustee of the Patrick J. McGovern Foundation. “We are confident the <em>RR:C19 </em>journal will quickly become an invaluable resource for researchers, public health officials, and healthcare providers on the frontline of this pandemic. 
We’re also excited about the potential for a long-term transformation in how we evaluate and share research across all scientific disciplines.”</p> <p>On the collaboration around this new journal, Travis Rich, executive director of the Knowledge Futures Group<strong> </strong>notes, “At a moment when credibility is increasingly crucial to the well-being of society, we’re thrilled to be partnering with this innovative journal to expand the idea of reviews as first-class research objects, both on PubPub and as a model for others.</p> <p><em>RR:C19</em> will publish its first reviews in July 2020 and is actively recruiting potential reviewers and contributors. To learn more about this project and its esteemed editorial board, visit<a href=””></a>.</p>
Rapid Reviews: COVID-19 (RR:C19) is an open access, rapid-review overlay journal that will accelerate peer review of Covid-19-related research.

Improving global health equity by helping clinics do more with less The startup macro-eyes uses artificial intelligence to improve vaccine delivery and patient scheduling. Thu, 25 Jun 2020 23:59:59 -0400 Zach Winn | MIT News Office <p>More children are being vaccinated around the world today than ever before, and the prevalence of many vaccine-preventable diseases has dropped over the last decade. Despite these encouraging signs, however, the availability of essential vaccines has stagnated globally in recent years, according the World Health Organization.</p><p>One problem, particularly in low-resource settings, is the difficulty of predicting how many children will show up for vaccinations at each health clinic. This leads to vaccine shortages, leaving children without critical immunizations, or to surpluses that can’t be used.</p><p>The startup macro-eyes is seeking to solve that problem with a vaccine forecasting tool that leverages a unique combination of real-time data sources, including new insights from front-line health workers. The company says the tool, named the Connected Health AI Network (CHAIN), was able to reduce vaccine wastage by 96 percent across three regions of Tanzania. Now it is working to scale that success across Tanzania and Mozambique.</p><p>“Health care is complex, and to be invited to the table, you need to deal with missing data,” says macro-eyes Chief Executive Officer Benjamin Fels, who co-founded the company with Suvrit Sra, the Esther and Harold E. Edgerton Career Development Associate Professor at MIT. 
“If your system needs age, gender, and weight to make predictions, but for one population you don’t have weight or age, you can’t just say, ‘This system doesn’t work.’ Our feeling is it has to be able to work in any setting.”</p><p>The company’s approach to prediction is already the basis for another product, the patient scheduling platform Sibyl, which has analyzed over 6 million hospital appointments and reduced wait times by more than 75 percent at one of the largest heart hospitals in the U.S. Sibyl’s predictions work as part of CHAIN’s broader forecasts.</p><p>Both products represent steps toward macro-eyes’ larger goal of transforming health care through artificial intelligence. And by getting their solutions to work in the regions with the least amount of data, they’re also advancing the field of AI.</p><p>“The state of the art in machine learning will result from confronting fundamental challenges in the most difficult environments in the world,” Fels says. “Engage where the problems are hardest, and AI too will benefit: [It will become] smarter, faster, cheaper, and more resilient.”</p><p><strong>Defining an approach</strong></p><p>Sra and Fels first met about 10 years ago when Fels was working as&nbsp;an algorithmic trader&nbsp;for a hedge fund and Sra was&nbsp;a visiting faculty member&nbsp;at the University of California at Berkeley. The pair’s experience crunching numbers in different industries alerted them to a shortcoming in health care.</p> <p>“A question that became an obsession to me was, ‘Why were financial markets almost entirely determined by machines — by algorithms — and health care the world over is probably the least algorithmic part of anybody’s life?’” Fels recalls. 
“Why is health care not more data-driven?”</p> <p>Around 2013, the co-founders began building machine-learning algorithms that measured similarities between patients to better inform treatment plans at Stanford School of Medicine and another large academic medical center in New York. It was during that early work that the founders laid the foundation of the company’s approach.</p><p>“There are themes we established at Stanford that remain today,” Fels says. “One is [building systems with] humans in the loop: We’re not just learning from the data, we’re also learning from the experts. The other is multidimensionality. We’re not just looking at one type of data; we’re looking at 10 or 15 types, [including] images, time series, information about medication, dosage, financial information, how much it costs the patient or hospital.”</p><p>Around the time the founders began working with Stanford, Sra joined MIT’s Laboratory for Information and Decision Systems (LIDS) as a principal research scientist. He would go on to become a faculty member in the Department of Electrical Engineering and Computer Science and MIT’s Institute for Data, Systems, and Society (IDSS). The mission of IDSS, to advance fields including data science and to use those advances to improve society, aligned well with Sra’s mission at macro-eyes.</p><p>“Because of that focus [on impact] within IDSS, I find it my focus to try to do AI for social good,’ Sra says. “The true judgment of success is how many people did we help? How could we improve access to care for people, wherever they may be?”</p> <p>In 2017, macro-eyes received a small grant from the Bill and Melinda Gates Foundation to explore the possibility of using data from front-line health workers to build a predictive supply chain for vaccines. 
It was the beginning of a relationship with the Gates Foundation that has steadily expanded as the company has reached new milestones, from building accurate vaccine utilization models in Tanzania and Mozambique to integrating with supply chains to make vaccine supplies more proactive. To help with the latter mission, Prashant Yadav recently joined the board of directors; Yadav worked as a professor of supply chain management with the MIT-Zaragoza International Logistics Program for seven years and is now a senior fellow at the Center for Global Development, a nonprofit thinktank.</p> <p>In conjunction with their work on CHAIN, the company has deployed another product, Sibyl, which uses machine learning to determine when patients are most likely to show up for appointments, to help front-desk workers at health clinics build schedules. Fels says the system has allowed hospitals to improve the efficiency of their operations so much they’ve reduced the average time patients wait to see a doctor from 55 days to 13 days.</p> <p>As a part of CHAIN, Sibyl similarly uses a range of data points to optimize schedules, allowing it to accurately predict behavior in environments where other machine learning models might struggle.</p> <p>The founders are also exploring ways to apply that approach to help direct Covid-19 patients to health clinics with sufficient capacity. That work is being developed with Sierra Leone Chief Innovation Officer David Sengeh SM ’12 PhD ’16.</p> <p><strong>Pushing frontiers</strong></p> <p>Building solutions for some of the most underdeveloped health care systems in the world might seem like a difficult way for a young company to establish itself, but the approach is an extension of macro-eyes’ founding mission of building health care solutions that can benefit people around the world equally.</p><p>“As an organization, we can never assume data will be waiting for us,” Fels says. 
“We’ve learned that we need to think strategically and be thoughtful about how to access or generate the data we need to fulfill our mandate: Make the delivery of health care predictive, everywhere.”</p><p>The approach is also a good way to explore innovations in mathematical fields the founders have spent their careers working in.</p><p>“Necessity is absolutely the mother of invention,” Sra says. “This is innovation driven by need.”</p><p>And going forward, the company’s work in difficult environments should only make scaling easier.</p><p><strong>“</strong>We think every day about how to make our technology more rapidly deployable, more generalizable, more highly scalable,” Sra says. “How do we get to the immense power of bringing true machine learning to the world’s most important problems without first spending decades and billions of dollars in building digital infrastructure? How do we leap into the future?”</p>
The startup macro-eyes is bringing new techniques in machine learning and artificial intelligence to global health problems like vaccine delivery and patient scheduling with its Connected Health AI Network (CHAIN). Courtesy of macro-eyes

Identifying a melody by studying a musician’s body language Music gesture artificial intelligence tool developed at the MIT-IBM Watson AI Lab uses body movements to isolate the sounds of individual instruments. Thu, 25 Jun 2020 11:25:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>We listen to music with our ears, but also our eyes, watching with appreciation as the pianist’s fingers fly over the keys and the violinist’s bow rocks across the ridge of strings. When the ear fails to tell two instruments apart, the eye often pitches in by matching each musician’s movements to the beat of each part.&nbsp;</p> <p>A <a href=”” target=”_blank”>new artificial intelligence tool</a> developed by the&nbsp;<a href=”” target=”_blank”>MIT-IBM Watson AI Lab</a>&nbsp;leverages the virtual eyes and ears of a computer to separate similar sounds that are tricky even for humans to differentiate. The tool improves on earlier iterations by matching the movements of individual musicians, via their skeletal keypoints, to the tempo of&nbsp;individual parts, allowing listeners to isolate a single flute or violin among multiple flutes or violins.&nbsp;</p> <p>Potential applications for the work range from sound mixing, and turning up the volume of an instrument in a recording, to reducing the confusion that leads people to talk over one another on a video-conference calls. The work will be presented at the virtual&nbsp;<a href=””>Computer Vision Pattern Recognition</a>&nbsp;conference this month.</p> <p>“Body keypoints provide powerful structural information,” says the study’s lead author,&nbsp;<a href=””>Chuang Gan</a>, an IBM researcher at the lab. “We use that here to improve the AI’s ability to listen and separate sound.”&nbsp;</p> <p>In this project, and in others like it, the researchers have capitalized on synchronized audio-video tracks to recreate the way that humans learn. 
An AI system that learns through multiple sense modalities may be able to learn faster, with fewer data, and without humans having to add pesky labels to each real-world representation. “We learn from all of our senses,” says Antonio Torralba, an MIT professor and co-senior author of the study. “Multi-sensory processing is the precursor to embodied intelligence and AI systems that can perform more complicated tasks.”</p> <p>The current tool, which uses&nbsp;body gestures&nbsp;to separate sounds, builds on earlier work that harnessed motion cues in sequences of images. Its earliest incarnation,&nbsp;PixelPlayer, let you&nbsp;<a href=”″>click on an instrument</a>&nbsp;in a concert video to make it louder or softer. An&nbsp;<a href=”″>update</a>&nbsp;to PixelPlayer allowed you to distinguish between two violins in a duet by matching each musician’s movements with the tempo of their part. This newest version adds keypoint data, favored by sports analysts to track athlete performance, to extract finer grained motion data to tell nearly identical sounds apart.</p> <p>The work highlights the importance of visual cues in training computers to have a better ear, and using sound cues to give them sharper eyes. Just as the current study uses musician pose information to isolate similar-sounding instruments, previous work has leveraged sounds to isolate similar-looking animals and objects.&nbsp;</p> <p>Torralba and his colleagues have shown that deep learning models trained on paired audio-video data can learn to&nbsp;<a href=”″>recognize natural sounds</a>&nbsp;like birds singing or waves crashing. They can also pinpoint the geographic coordinates of a&nbsp;<a href=”″>moving car</a>&nbsp;from the sound of its engine and tires rolling toward, or away from, a microphone.&nbsp;</p> <p>The latter study suggests that sound-tracking tools might be a useful addition in self-driving cars, complementing their cameras in poor driving conditions. 
“Sound trackers could be especially helpful at night, or in bad weather, by helping to flag cars that might otherwise be missed,” says Hang Zhao, PhD ’19, who contributed to both the motion and sound-tracking studies.</p> <p>Other authors of the CVPR music gesture study are Deng Huang and Joshua Tenenbaum at MIT.</p>
Researchers use skeletal keypoint data to match the movements of musicians with the tempo of their part, allowing listeners to isolate similar-sounding instruments. Image courtesy of the researchers.

Cynthia Breazeal named Media Lab associate director Expert in personal social robots will work with lab faculty and researchers to develop strategic research initiatives, and to explore new funding mechanisms. Fri, 19 Jun 2020 15:15:01 -0400 MIT Media Lab <p>Cynthia Breazeal has been promoted to full professor and named associate director of the Media Lab, joining the two other associate directors: Hiroshi Ishii and Andrew Lippman. Both appointments are effective July 1.</p><p>In her new associate director role, Breazeal will work with lab faculty and researchers to develop new strategic research initiatives. She will also play a key role in exploring new funding mechanisms to support broad Media Lab needs, including multi-faculty research efforts, collaborations with other labs and departments across the MIT campus, and experimental executive education opportunities.&nbsp;</p><p>“I am excited that Cynthia will be applying her tremendous energy, creativity, and intellect to rally the community in defining new opportunities for funding and research directions,” says Pattie Maes, chair of the lab’s executive committee. “As a first step, she has already organized a series of informal charrettes, where all members of the lab community can participate in brainstorming collaborations that range from tele-creativity, to resilient communities, to sustainability and climate change.”&nbsp;</p><p>Most recently, Breazeal has led an MIT collaboration between the Media Lab, MIT Stephen A. 
Schwarzman College of Computing, and MIT Open Learning to develop <a href=””></a>, an online learning site for grades K-12, which shares a variety of online activities for students to learn about artificial intelligence, with a focus on how to design and use AI responsibly.&nbsp;</p><p>While assuming these new responsibilities, Breazeal will continue to head the lab’s Personal Robots research group, which focuses on developing personal social robots and their potential for meaningful impact on everyday life — from educational aids for children, to pediatric use in hospitals, to at-home assistants for the elderly.</p><p>Breazeal is globally recognized as a pioneer in human-robot interaction. Her book, “Designing Sociable Robots” (MIT Press, 2002), is considered pivotal in launching the field. In 2019 she was named an AAAI fellow. Previously, she received numerous awards including the National Academy of Engineering’s Gilbreth Lecture Award and <em>MIT Technology Review</em>’s TR100/35 Award. Her robot Jibo was on the cover of <em>TIME</em> magazine in its Best Inventions list of 2017, and in 2003 she was a finalist for the National Design Awards in Communications Design. In 2014, <em>Fortune</em> magazine recognized her as one of the Most Promising Women Entrepreneurs. The following year, she was named one of <em>Entrepreneu</em>r magazine’s Women to Watch.</p> <p>Breazeal earned a BS in electrical and computer engineering from the University of California at Santa Barbara, and MS and ScD degrees from MIT in electrical engineering and computer science.</p> <div></div> <div></div> <div></div> <div></div> <div></div> <p></p>
Cynthia Breazeal has been promoted to full professor and named associate director of the Media Lab. Photo courtesy of Cynthia Breazeal.

Bringing the predictive power of artificial intelligence to health care The startup ClosedLoop has created a platform of predictive models to help organizations improve patient care. Thu, 18 Jun 2020 23:59:59 -0400 Zach Winn | MIT News Office <p>An important aspect of treating patients with conditions like diabetes and heart disease is helping them stay healthy outside of the hospital — before they to return to the doctor’s office with further complications.</p><p>But reaching the most vulnerable patients at the right time often has more to do with probabilities than clinical assessments. Artificial intelligence (AI) has the potential to help clinicians tackle these types of problems, by analyzing large datasets to identify the patients that would benefit most from preventative measures. However, leveraging AI has often required health care organizations to hire their own data scientists or settle for one-size-fits-all solutions that aren’t optimized for their patients.</p><p>Now the startup is helping health care organizations tap into the power of AI with a flexible analytics solution that lets hospitals quickly plug their data into machine learning models and get actionable results.</p><p>The platform is being used to help hospitals determine which patients are most likely to miss appointments, acquire infections like sepsis, benefit from periodic check ups, and more. Health insurers, in turn, are using ClosedLoop to make population-level predictions around things like patient readmissions and the onset or progression of chronic diseases.</p><p>“We built a health care data science platform that can take in whatever data an organization has, quickly build models that are specific to [their patients], and deploy those models,” says ClosedLoop co-founder and Chief Technology Officer Dave DeCaprio ’94. 
“Being able to take somebody’s data the way it lives in their system and convert that into a model that can be readily used is still a problem that requires a lot of [health care] domain knowledge, and that’s a lot of what we bring to the table.”</p><p>In light of the Covid-19 pandemic, ClosedLoop has also created a model that helps organizations identify the most vulnerable people in their region and prepare for patient surges. The open source tool, called the C-19 Index, has been used to connect high-risk patients with local resources and helped health care systems create risk scores for tens of millions of people overall.</p><p>The index is just the latest way that ClosedLoop is accelerating the health care industry’s adoption of AI to improve patient health, a goal DeCaprio has worked toward for the better part of his career.</p><p><strong>Designing a strategy</strong></p><p>After working as a software engineer for several private companies through the internet boom of the early 2000s, DeCaprio was looking to make a career change when he came across a project focused on genome annotation at the Broad Institute of MIT and Harvard.</p><p>The project was DeCaprio’s first professional exposure to the power of artificial intelligence. It blossomed into a six year stint at the Broad, after which he continued exploring the intersection of big data and health care.</p><p>“After a year in health care, I realized it was going to be really hard to do anything else,” DeCaprio says. “I’m not going to be able to get excited about selling ads on the internet or anything like that. 
Once you start dealing with human health, that other stuff just feels insignificant.”</p><p>In the course of his work, DeCaprio began noticing problems with the ways machine learning and other statistical techniques were making their way into health care, notably in the fact that predictive models were being applied without regard for hospitals’ patient populations.</p><p>“Someone would say, ‘I know how to predict diabetes’ or ‘I know how to predict readmissions,’ and they’d sell a model,” DeCaprio says. “I knew that wasn’t going to work, because the reason readmissions happen in a low-income population of New York City is very different from the reason readmissions happen in a retirement community in Florida. The important thing wasn’t to build one magic model but to build a system that can quickly take somebody’s data and train a model that’s specific for their problems.”</p><p>With that approach in mind, DeCaprio joined forces with former co-worker and serial entrepreneur Andrew Eye, and started ClosedLoop in 2017. The startup’s first project involved creating models that predicted patient health outcomes for the Medical Home Network (MHN), a not-for-profit hospital collaboration focused on improving care for Medicaid recipients in Chicago.</p><p>As the founders created their modeling platform, they had to address many of the most common obstacles that have slowed health care’s adoption of AI solutions.</p><p>Often the first problems startups run into is making their algorithms work with each health care system’s data. Hospitals vary in the type of data they collect on patients and the way they store that information in their system. 
Hospitals even store the same types of data in vastly different ways.</p><p>DeCaprio credits his team’s knowledge of the health care space with helping them craft a solution that allows customers to upload raw data sets into ClosedLoop’s platform and create things like patient risk scores with a few clicks.</p><p>Another limitation of AI in health care has been the difficulty of understanding how models get to results. With ClosedLoop’s models, users can see the biggest factors contributing to each prediction, giving them more confidence in each output.</p><p>Overall, to become ingrained in customer’s operations, the founders knew their analytics platform needed to give simple, actionable insights. That has translated into a system that generates lists, risk scores, and rankings that care managers can use when deciding which interventions are most urgent for which patients.</p><p>“When someone walks into the hospital, it’s already too late [to avoid costly treatments] in many cases,” DeCaprio says. “Most of your best opportunities to lower the cost of care come by keeping them out of the hospital in the first place.”</p><p>Customers like health insurers also use ClosedLoop’s platform to predict broader trends in disease risk, emergency room over-utilization, and fraud.</p><p><strong>Stepping up for Covid-19</strong></p><p>In March, ClosedLoop began exploring ways its platform could help hospitals prepare for and respond to Covid-19. The efforts culminated in a company hackathon over the weekend of March 16. By Monday, ClosedLoop had an open source model on GitHub that assigned Covid-19 risk scores to Medicare patients. By that Friday, it had been used to make predictions on more than 2 million patients.</p><p>Today, the model works with all patients, not just those on Medicare, and it has been used to assess the vulnerability of communities around the country. 
Care organizations have used the model to project patient surges and help individuals at the highest risk understand what they can do to prevent infection.</p><p>“Some of it is just reaching out to people who are socially isolated to see if there’s something they can do,” DeCaprio says. “Someone who is 85 years old and shut in may not know there’s a community based organization that will deliver them groceries.”</p><p>For DeCaprio, bringing the predictive power of AI to health care has been a rewarding, if humbling, experience.</p><p>“The magnitude of the problems are so large that no matter what impact you have, you don’t feel like you’ve moved the needle enough,” he says. “At the same time, every time an organization says, ‘This is the primary tool our care managers have been using to figure out who to reach out to,’ it feels great.”</p>
The startup, co-founded by an MIT alumnus, is using a platform of AI models to help hospitals make predictions based on their patient data. Image: MIT News, with images courtesy of the researchers

MIT and Toyota release innovative dataset to accelerate autonomous driving research DriveSeg contains precise, pixel-level representations of many common road objects, but through the lens of a continuous video driving scene. Thu, 18 Jun 2020 14:55:01 -0400 MIT AgeLab <p><em>The following was issued as a joint release from the MIT AgeLab and Toyota Collaborative Safety Research Center.</em></p> <p>How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?</p> <p>These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the <a href=””>Toyota Collaborative Safety Research Center</a> (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg<em>.</em></p> <p>Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.</p> <p>“In sharing this dataset, we hope to encourage researchers, the industry, and other innovators to develop new insight and direction into temporal AI modeling that enables the next generation of assisted driving and automotive safety technologies,” says Bryan Reimer, principal researcher. “Our longstanding working relationship with Toyota CSRC has enabled our research efforts to impact future safety technologies.”</p> <p>“Predictive power is an important part of human intelligence,” says Rini Sherony, Toyota CSRC’s senior principal engineer. “Whenever we drive, we are always tracking the movements of the environment around us to identify potential risks and make safer decisions. 
By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”</p> <p>To date, self-driving data made available to the research community have primarily consisted of troves of static, single images that can be used to identify and track common objects found in and around the road, such as bicycles, pedestrians, or traffic lights, through the use of “bounding boxes.” By contrast, DriveSeg contains more precise, pixel-level representations of many of these same common road objects, but through the lens of a continuous video driving scene. This type of full-scene segmentation can be particularly helpful for identifying more amorphous objects — such as road construction and vegetation — that do not always have such defined and uniform shapes.</p> <p>According to Sherony, video-based driving scene perception provides a flow of data that more closely resembles dynamic, real-world driving situations. It also allows researchers to explore data patterns as they play out over time, which could lead to advances in machine learning, scene understanding, and behavioral prediction.</p> <p>DriveSeg is available for free and can be used by researchers and the academic community for non-commercial purposes at the links below. The dataset consists of two parts. <a href=””>DriveSeg (manual)</a> is 2 minutes and 47 seconds of high-resolution video captured during a daytime trip around the busy streets of Cambridge, Massachusetts. The video’s 5,000 frames are densely annotated manually with per-pixel human labels of 12 classes of road objects.</p> <p><a href=””>DriveSeg (Semi-auto)</a> is 20,100 video frames (67 10-second video clips) drawn from <a href=””>MIT Advanced Vehicle Technologies (AVT)</a> Consortium data. 
DriveSeg (Semi-auto) is labeled with the same pixel-wise semantic annotation as DriveSeg (manual), except annotations were completed through a novel semiautomatic annotation approach developed by MIT. This approach leverages both manual and computational efforts to coarsely annotate data more efficiently at a lower cost than manual annotation. This dataset was created to assess the feasibility of annotating a wide range of real-world driving scenarios and assess the potential of training vehicle perception systems on pixel labels created through AI-based labeling systems.</p> <p>To learn more about the technical specifications and permitted use-cases for the data, visit the <a href=””>DriveSeg dataset page.</a></p>
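Per-pixel annotations of the kind described above are typically distributed as label masks in which each pixel holds an integer class ID. As a minimal sketch of working with such masks — using a hypothetical class list and a toy two-row "frame," since DriveSeg's actual storage format and class names may differ — one could tally how much of each frame every class occupies:

```python
from collections import Counter

# Hypothetical 12-class label scheme for illustration only;
# DriveSeg's real class names and IDs may differ.
CLASSES = ["vehicle", "pedestrian", "road", "sidewalk", "bicycle", "motorcycle",
           "signal", "road marking", "sign", "vegetation", "construction", "void"]

def class_frequencies(mask):
    """Given a 2-D mask of integer class IDs (one per pixel),
    return each present class's share of the total pixel count."""
    counts = Counter(pix for row in mask for pix in row)
    total = sum(counts.values())
    return {CLASSES[cid]: n / total for cid, n in counts.items()}

# Toy 2x4 "frame": mostly road (ID 2) with one vehicle pixel (ID 0).
mask = [[2, 2, 2, 0],
        [2, 2, 2, 2]]
freqs = class_frequencies(mask)  # e.g. {"road": 0.875, "vehicle": 0.125}
```

Summing such per-frame shares across all 5,000 annotated frames would give a quick picture of how heavily the data skews toward road surface versus rarer classes such as pedestrians.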
Sample frames from MIT AgeLab’s annotated video dataset Image courtesy of Li Ding, Jack Terwilliger, Rini Sherony, Bryan Reimer, and Lex Fridman.

MIT-Takeda program launches Research projects will harness the power of artificial intelligence to positively impact human health. Thu, 18 Jun 2020 14:20:01 -0400 School of Engineering <p>In February, researchers from MIT and Takeda Pharmaceuticals joined together to celebrate the official launch of the <a href=”″>MIT-Takeda Program</a>. The MIT-Takeda Program aims to fuel the development and application of artificial intelligence (AI) capabilities to benefit human health and drug development. Centered within the Abdul Latif Jameel Clinic for Machine Learning in Health (<a href=””>Jameel Clinic</a>), the program brings together the MIT School of Engineering and Takeda Pharmaceuticals, to combine knowledge and address challenges of mutual interest.&nbsp; &nbsp;</p> <p>Following a competitive proposal process, nine inaugural research projects were selected. The program’s flagship research projects include principal investigators from departments and labs spanning the School of Engineering and the Institute. Research includes diagnosis of diseases, prediction of treatment response, development of novel biomarkers, process control and improvement, drug discovery, and clinical trial optimization.</p> <p>“We were truly impressed by the creativity and breadth of the proposals we received,” says Anantha P. Chandrakasan, dean of the School of Engineering, Vannevar Bush Professor of Electrical Engineering and Computer Science, and co-chair of the MIT-Takeda Program Steering Committee.</p> <p>Engaging with researchers and industry experts from Takeda, each project team will bring together different disciplines, merging theory and practical implementation, while combining algorithm and platform innovations.</p> <p>“This is an incredible opportunity to merge the cross-disciplinary and cross-functional expertise of both MIT and Takeda researchers,” says Chandrakasan. 
“This particular collaboration between academia and industry is of great significance as our world faces enormous challenges pertaining to human health. I look forward to witnessing the evolution of the program and the impact its research aims to have on our society.”&nbsp;</p> <p>“The shared enthusiasm and combined efforts of researchers from across MIT and Takeda have the opportunity to shape the future of health care,” says Anne Heatherington, senior vice president and head of Data Sciences Institute (DSI) at Takeda, and co-chair of the MIT-Takeda Program Steering Committee. “Together we are building capabilities and addressing challenges through interrogation of multiple data types that we have not been able to solve with the power of humans alone that have the potential to benefit both patients and the greater community.”</p> <p>The following are the inaugural projects of the MIT-Takeda Program. Included are the MIT teams collaborating with Takeda researchers, who are leveraging AI to positively impact human health.</p> <p>”AI-enabled, automated inspection of lyophilized products in sterile pharmaceutical manufacturing”: Duane Boning, the Clarence J. 
LeBel Professor of Electrical Engineering and faculty co-director of the Leaders for Global Operations program; Luca Daniel, professor of electrical engineering and computer science; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president for open learning; and Brian Subirana, research scientist and director of the MIT Auto-ID Laboratory within the Department of Mechanical Engineering.</p> <p>”Automating adverse effect assessments and scientific literature review”: Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science and Jameel Clinic faculty co-lead; Tommi Jaakkola, the Thomas Siebel&nbsp;Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society; and Jacob Andreas, assistant professor of electrical engineering and computer science.</p> <p>”Automated analysis of speech and language deficits for frontotemporal dementia”: James Glass, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory; Sanjay Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor of Mechanical Engineering and vice president for open learning; and Brian Subirana, research scientist and director of the MIT Auto-ID Laboratory within the Department of Mechanical Engineering.</p> <p>”Discovering human-microbiome protein interactions with continuous distributed representation”: Jim Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science and Department of Biological Engineering, Jameel Clinic faculty co-lead, and MIT-Takeda Program faculty lead; and Timothy Lu, associate professor of electrical engineering and computer science and of biological engineering.</p> <p>”Machine learning for early diagnosis, progression risk estimation, and identification of non-responders to conventional therapy for inflammatory bowel disease”: Peter Szolovits, professor of 
computer science and engineering, and David Sontag, associate professor of electrical engineering and computer science.</p> <p>”Machine learning for image-based liver phenotyping and drug discovery”: Polina Golland, professor of electrical engineering and computer science; Brian W. Anthony, principal research scientist in the Department of Mechanical Engineering; and Peter Szolovits, professor of computer science and engineering.</p> <p>”Predictive in silico models for cell culture process development for biologics manufacturing”: Connor W. Coley, assistant professor of chemical engineering, and J. Christopher Love, the Raymond A. (1921) and Helen E. St. Laurent Professor of Chemical Engineering.</p> <p>”Automated data quality monitoring for clinical trial oversight via probabilistic programming”: Vikash Mansinghka, principal research scientist in the Department of Brain and Cognitive Sciences; Tamara Broderick, associate professor<em> </em>of electrical engineering and computer science; David Sontag, associate professor of electrical engineering and computer science; Ulrich Schaechtle, research scientist in the Department of Brain and Cognitive Sciences; and Veronica Weiner, director of special projects for the MIT Probabilistic Computing Project.</p> <p>”Time series analysis from video data for optimizing and controlling unit operations in production and manufacturing”: Allan S. Myerson, professor of chemical engineering; George Barbastathis, professor of mechanical engineering; Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering; and Bernhardt Trout, the Raymond F. Baddour, ScD, (1949) Professor of Chemical Engineering.</p> <p>“The flagship research projects of the MIT-Takeda Program offer real promise to the ways we can impact human health,” says Jim Collins. “We are delighted to have the opportunity to collaborate with Takeda researchers on advances that leverage AI and aim to shape health care around the globe.”</p>
Researchers present at the MIT-Takeda launch event earlier this year.

What jumps out in a photo changes the longer we look Researchers capture our shifting gaze in a model that suggests how to prioritize visual information based on viewing duration. Wed, 17 Jun 2020 14:35:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>What seizes your attention at first glance might change with a closer look. That elephant dressed in&nbsp;red wallpaper&nbsp;might initially grab your eye until&nbsp;your gaze&nbsp;moves to the woman on the living room couch and the surprising realization that the pair appear to be sharing a quiet moment together.</p> <p>In a study being presented at the virtual&nbsp;<a href=”” target=”_blank”>Computer Vision and Pattern Recognition</a>&nbsp;conference this week, researchers show that our attention moves in distinctive ways the longer we stare at an image, and that these viewing patterns can be replicated by artificial intelligence models. The work suggests immediate ways of improving how visual content is teased and eventually displayed online. For example, an automated cropping tool might zoom in on the elephant for a thumbnail&nbsp;preview or zoom out to include the intriguing details that&nbsp;become visible once a reader clicks on the story.</p> <p>“In the real world, we look at the scenes around us and our attention also moves,” says&nbsp;<a href=””>Anelise Newman</a>, the study’s co-lead author and a master’s student at MIT. “What captures our interest over time varies.” The study’s senior authors are&nbsp;<a href=””>Zoya Bylinskii</a> PhD ’18,&nbsp;a research scientist at Adobe Research,&nbsp;and&nbsp;<a href=””>Aude Oliva</a>, co-director of the MIT Quest for Intelligence and a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory.</p> <p>What researchers know about saliency, and how humans perceive images, comes from experiments in which&nbsp;participants are shown pictures for a fixed period of time. But in the real world, human attention often shifts abruptly. 
To simulate this variability, the researchers used a crowdsourcing user interface called CodeCharts to show participants photos at three durations — half a second, 3 seconds, and 5 seconds — in a set of online experiments.&nbsp;</p> <p>When the image disappeared, participants were asked to report where they had last looked by typing in a three-digit code on a gridded map corresponding to the image. In the end, the researchers were able to gather heat maps of where in a given image participants had collectively focused their gaze at different moments in time.&nbsp;</p> <p>At the split-second interval, viewers focused on faces or a visually dominant animal or object. By 3 seconds, their gaze had shifted to action-oriented features, like a dog on a leash, an archery target, or an airborne frisbee. At 5 seconds, their gaze either shot back, boomerang-like, to the main subject, or it lingered on the suggestive details.&nbsp;</p> <p>“We were surprised at just how consistent these viewing patterns were at different durations,” says the study’s other lead author,&nbsp;<a href=””>Camilo Fosco</a>, a PhD student at MIT.</p> <p>With&nbsp;real-world&nbsp;data in hand, the researchers next trained a deep learning model to predict the focal points of images it had never seen before, at different viewing durations. To reduce the size of their model, they included a recurrent module that works on compressed representations of the input image, mimicking the human gaze as it explores an image at varying durations. When tested, their model outperformed the state of the art at predicting saliency across viewing durations.</p> <p>The model has potential applications for editing and rendering compressed images and even improving the accuracy of automated image captioning. In addition to guiding an editing tool to crop an image for shorter or longer viewing durations, it could prioritize which elements in a compressed image to render first for viewers. 
By clearing away the visual clutter in a scene, it could improve the overall accuracy of current photo-captioning techniques. It could also generate captions for images meant for split-second viewing only.&nbsp;</p> <p>“The content that you consider most important depends on the time you have to look at it,” says Bylinskii. “If you see the full image at once, you may not have time to absorb it all.”</p> <p>As more images and videos are shared online, the need for better tools to find and make sense of relevant content is growing. Research on human attention offers insights for technologists. Just as computers and camera-equipped mobile phones helped create the data overload, they are also giving researchers new platforms for studying human attention and designing better tools to help us cut through the noise.</p> <p>In a related study accepted to the&nbsp;<a href=””>ACM Conference on Human Factors in Computing Systems</a>, researchers outline the relative benefits of four web-based user interfaces, including CodeCharts, for gathering human attention data at scale. All four tools capture attention without relying on traditional eye-tracking hardware in a lab, either by collecting self-reported gaze data, as CodeCharts does, or by recording where subjects click their mouse or zoom in on an image.</p> <p>“There’s no one-size-fits-all interface that works for all use cases, and our paper focuses on teasing apart these trade-offs,” says Newman, lead author of the study.</p> <p>By making it faster and cheaper to gather human attention data, the platforms may help to generate new knowledge on human vision and cognition. “The more we learn about how humans see and understand the world, the more we can build these insights into our AI tools to make them more useful,” says Oliva.</p> <p>Other authors of the CVPR paper are Pat Sukhum, Yun Bin Zhang, and Nanxuan Zhao. 
The research was supported by the Vannevar Bush Faculty Fellowship program, an Ignite grant from the SystemsThatLearn@CSAIL, and cloud computing services from MIT Quest.</p>
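The CodeCharts interface described above reduces each viewer's report to a cell on a gridded map, so a heatmap for a given viewing duration is essentially a normalized 2-D histogram of those reported cells. A toy sketch (grid dimensions and gaze reports are invented for illustration; the actual CodeCharts encoding differs):

```python
def gaze_heatmap(reports, grid_w, grid_h):
    """Aggregate self-reported gaze cells (col, row) into a normalized
    heatmap over a grid_h x grid_w grid, CodeCharts-style."""
    heat = [[0.0] * grid_w for _ in range(grid_h)]
    for col, row in reports:
        heat[row][col] += 1.0
    total = max(len(reports), 1)  # avoid division by zero on empty input
    return [[v / total for v in row] for row in heat]

# Hypothetical reports from three viewers after a half-second exposure:
# two last looked at cell (1, 0), one at cell (2, 1).
heat = gaze_heatmap([(1, 0), (1, 0), (2, 1)], grid_w=4, grid_h=2)
```

Building one such histogram per exposure duration (0.5, 3, and 5 seconds) is what lets the researchers compare how collective attention shifts over time, and it supplies the per-duration ground truth their saliency model is trained against.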
An MIT study shows viewers’ attention shifts the longer they gaze at an image. Given just a half-second to look at the photo at left, in online experiments, they focused on the elephant, as shown in this heat map. Image courtesy of the researchers.

Learning the ropes and throwing lifelines PhD student Geeticka Chauhan draws on her experiences as an international student to strengthen the bonds of her MIT community. Mon, 08 Jun 2020 23:59:59 -0400 Sofia Tong | MIT News correspondent <p>In March, as her friends and neighbors were scrambling to pack up and leave campus due to the Covid-19 pandemic, Geeticka Chauhan found her world upended in yet another way. Just weeks earlier, she had been elected council president of MIT’s largest graduate residence, Sidney-Pacific. Suddenly the fourth-year PhD student was plunged into rounds of emergency meetings with MIT administrators.</p><p>From her apartment in Sidney-Pacific, where she has stayed put due to travel restrictions in her home country of India, Chauhan is still learning the ropes of her new position. With others, she has been busy preparing to meet the future challenge of safely redensifying the living space of more than 1,000 people: how to regulate high-density common areas, handle noise complaints as people spend more time in their rooms, and care for the mental and physical well-being of a community that can only congregate virtually. “It’s just such a crazy time,” she says.</p><p>She’s prepared for the challenge. During her time at MIT, while pursuing her research using artificial intelligence to understand human language, Chauhan has worked to strengthen the bonds of her community in numerous ways, often drawing on her experience as an international student to do so.</p><p><strong>Adventures in brunching</strong></p><p>When Chauhan first came to MIT in 2017, she quickly fell in love with Sidney-Pacific’s thriving and freewheeling “helper culture.” “These are all researchers, but they’re maybe making brownies, doing crazy experiments that they would do in lab, except in the kitchen,” she says. 
“That was my first introduction to the MIT spirit.”</p><p>Next thing she knew, she was teaching Budokon yoga, mashing chickpeas into guacamole, and immersing herself in the complex operations of a <a href=”″ target=”_blank”>monthly brunch</a> attended by hundreds of graduate students, many of whom came to MIT from outside the U.S. In addition to the genuine thrill of cracking <a href=”” target=”_blank”>300 eggs in 30 minutes</a>, working on the brunches kept her grounded in a place thousands of miles from her home in New Delhi. “It gave me a sense of community and made me feel like I have a family here,” she says.</p><p>Chauhan has found additional ways to address the particular difficulties that international students face. As a member of the Presidential Advisory Council this year, she gathered international student testimonies on visa difficulties and presented them to MIT’s president and the director of the International Students Office. And when a friend from mainland China had to self-quarantine on Valentine’s Day, Chauhan knew she had to act. As brunch chair, she organized food delivery, complete with chocolates and notes, for Sidney-Pacific residents who couldn’t make it to the monthly event. “Initially when you come back to the U.S. from your home country, you really miss your family,” she says. “I thought self-quarantining students should feel their MIT community cares for them.”</p><p><strong>Culture shock</strong></p><p>Growing up in New Delhi, math was initially one of her weaknesses, Chauhan says, and she was scared and confused by her early introduction to coding. Her mother and grandmother, with stern kindness and chocolates, encouraged her to face these fears. “My mom used to teach me that with hard work, you can make your biggest weakness your biggest strength,” she explains. 
She soon set her sights on a future in computer science.</p><p>However, as Chauhan found her life increasingly dominated by the high-pressure culture of preparing for college, she began to long for a feeling of wholeness, and for the person she left behind on the way. “I used to have a lot of artistic interests but didn’t get to explore them,” she says. She quit her weekend engineering classes, enrolled in a black and white photography class, and after learning about the extracurricular options at American universities, landed a full scholarship to attend Florida International University.</p><p>It was a culture shock. She didn’t know many Indian students in Miami and felt herself struggling to reconcile the individualistic mindset around her with the community and family-centered life at home. She says the people she met got her through, including <a href=”” target=”_blank”>Mark Finlayson</a>, a professor studying the science of narrative from the viewpoint of natural language processing. Under Finlayson’s guidance she developed a fascination with the way AI techniques could be used to better understand the patterns and <a href=”” target=”_blank”>structures in human narratives</a>. She learned that studying AI wasn’t just a way of imitating human thinking, but rather an approach for deepening our understanding of ourselves as reflected by our language. “It was due to Mark’s mentorship that I got involved in research” and applied to MIT, she says.</p><p><strong>The holistic researcher</strong></p><p>Chauhan now works in the Clinical Decision Making Group led by Peter Szolovits at the Computer Science and Artificial Intelligence Laboratory, where she is focusing on the ways natural language processing can address health care problems. For her master’s project, she worked on the problem of relation extraction and built a tool to digest clinical literature that would, for example, help pharmacologists easily assess negative drug interactions. 
Now, she’s finishing up a <a href=”” target=”_blank”>project</a> integrating visual analysis of chest radiographs and textual analysis of radiology reports for quantifying pulmonary edema, to help clinicians manage the fluid status of their patients who have suffered acute heart failure.</p><p>“In routine clinical practice, patient care is interwoven with a lot of bureaucratic work,” she says. “The goal of my lab is to assist with clinical decision making and give clinicians the full freedom and time to devote to patient care.”</p><p>It’s an exciting moment for Chauhan, who recently submitted a paper she co-first authored with another grad student, and is starting to think about her next project: interpretability, or how to elucidate a decision-making model’s “thought process” by highlighting the data from which it draws its conclusions. She continues to find the intersection of computer vision and natural language processing an exciting area of research. But there have been challenges along the way.</p><p>After the initial flurry of excitement her first year, personal and faculty expectations of students’ independence and publishing success grew, and she began to experience uncertainty and imposter syndrome. “I didn’t know what I was capable of,” she says. “That initial period of convincing yourself that you belong is difficult. I am fortunate to have a supportive advisor that understands that.”</p><p>Finally, one of her first-year projects showed promise, and she came up with a master’s thesis plan in a month and submitted the project that semester. To get through, she says, she drew on her “survival skills”: allowing herself to be a full person beyond her work as a researcher so that one setback didn’t become a sense of complete failure. For Chauhan, that meant working as a teaching assistant, drawing henna designs, singing, enjoying yoga, and staying involved in student government. “I used to try to separate that part of myself with my work side,” she says. 
“I needed to give myself some space to learn and grow, rather than compare myself to others.”</p><p>Citing a <a href=”” target=”_blank”>study</a> showing that women are more likely to drop out of STEM disciplines when they receive a B grade in a challenging course, Chauhan says she wishes she could tell her younger self not to compare herself with an ideal version of herself. Dismantling imposter syndrome requires an understanding that qualification and success can come from a broad range of experiences, she says: It’s about “seeing people for who they are holistically, rather than what is seen on the resume.”</p>
PhD student Geeticka Chauhan is finishing up a project integrating visual analysis of chest radiographs and textual analysis of radiology reports, to help clinicians assess the proper balance of treatments for acute heart failure. Illustration: Jose-Luis Olivares, MIT

Engineers put tens of thousands of artificial brain synapses on a single chip The design could advance the development of small, portable AI devices. Mon, 08 Jun 2020 12:18:05 -0400 Jennifer Chu | MIT News Office <p>MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.</p><p>The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.</p><p>Their results, published today in the journal <em>Nature Nanotechnology</em>, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.</p><p>“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”</p><p><strong>Wandering ions</strong></p><p>Memristors, or memory resistors, are an essential element in neuromorphic computing. 
In a neuromorphic device, a memristor would serve as the transistor in a circuit, though its workings would more closely resemble a brain synapse — the junction between two neurons. The synapse receives signals from one neuron, in the form of ions, and sends a corresponding signal to the next neuron.</p><p>A transistor in a conventional circuit transmits information by switching between one of only two values, 0 and 1, and doing so only when the signal it receives, in the form of an electric current, is of a particular strength. In contrast, a memristor would work along a gradient, much like a synapse in the brain. The signal it produces would vary depending on the strength of the signal that it receives. This would enable a single memristor to have many values, and therefore carry out a far wider range of operations than binary transistors.</p><p>Like a brain synapse, a memristor would also be able to “remember” the value associated with a given current strength, and produce the exact same signal the next time it receives a similar current. This could ensure that the answer to a complex equation, or the visual classification of an object, is reliable — a feat that normally involves multiple transistors and capacitors.</p><p>Ultimately, scientists envision that memristors would require far less chip real estate than conventional transistors, enabling powerful, portable computing devices that do not rely on supercomputers, or even connections to the Internet.</p><p>Existing memristor designs, however, are limited in their performance. A single memristor is made of a positive and negative electrode, separated by a “switching medium,” or space between the electrodes. When a voltage is applied to one electrode, ions from that electrode flow through the medium, forming a “conduction channel” to the other electrode. The received ions make up the electrical signal that the memristor transmits through the circuit. 
The size of the ion channel (and the signal that the memristor ultimately produces) should be proportional to the strength of the stimulating voltage.</p><p>Kim says that existing memristor designs work pretty well in cases where voltage stimulates a large conduction channel, or a heavy flow of ions from one electrode to the other. But these designs are less reliable when memristors need to generate subtler signals, via thinner conduction channels.</p><p>The thinner a conduction channel, and the lighter the flow of ions from one electrode to the other, the harder it is for individual ions to stay together. Instead, they tend to wander from the group, disbanding within the medium. As a result, it’s difficult for the receiving electrode to reliably capture the same number of ions, and therefore transmit the same signal, when stimulated with a certain low range of current.</p><p><strong>Borrowing from metallurgy</strong></p><p>Kim and his colleagues found a way around this limitation by borrowing a technique from metallurgy, the science of melding metals into alloys and studying their combined properties.</p><p>“Traditionally, metallurgists try to add different atoms into a bulk matrix to strengthen materials, and we thought, why not tweak the atomic interactions in our memristor, and add some alloying element to control the movement of ions in our medium,” Kim says.</p><p>Engineers typically use silver as the material for a memristor’s positive electrode. 
Kim’s team looked through the literature to find an element that they could combine with silver to effectively hold silver ions together, while allowing them to flow quickly through to the other electrode.</p><p>The team landed on copper as the ideal alloying element, as it is able to bind both with silver, and with silicon.</p><p>“It acts as a sort of bridge, and stabilizes the silver-silicon interface,” Kim says.</p><p>To make memristors using their new alloy, the group first fabricated a negative electrode out of silicon, then made a positive electrode by depositing a slight amount of copper, followed by a layer of silver. They sandwiched the two electrodes around an amorphous silicon medium. In this way, they patterned a millimeter-square silicon chip with tens of thousands of memristors.</p><p>As a first test of the chip, they recreated a gray-scale image of the Captain America shield. They equated each pixel in the image to a corresponding memristor in the chip. They then modulated the conductance of each memristor in proportion to the brightness of the corresponding pixel.</p><p>The chip produced the same crisp image of the shield, and was able to “remember” and reproduce the image many times, more faithfully than chips made of unalloyed materials.</p><p>The team also ran the chip through an image processing task, programming the memristors to alter an image, in this case of MIT’s Killian Court, in several specific ways, including sharpening and blurring the original image. Again, their design produced the reprogrammed images more reliably than existing memristor designs.</p><p>“We’re using artificial synapses to do real inference tests,” Kim says. “We would like to develop this technology further to have larger-scale arrays to do image recognition tasks. 
And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”</p><p>This research was funded, in part, by the MIT Research Support Committee funds, the MIT-IBM Watson AI Lab, Samsung Global Research Laboratory, and the National Science Foundation.</p>
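The multi-valued storage that separates a memristor from a binary transistor can be illustrated with a toy model: each pixel of a grayscale image is written as an analog conductance and read back by inverting the mapping. The conductance range below is invented for illustration and is not taken from the paper:

```python
G_MIN, G_MAX = 1e-6, 1e-4  # hypothetical conductance range, in siemens

def write_image(pixels):
    """Program one memristor per pixel: map each 0-255 gray value
    linearly onto the device's conductance range."""
    return [[G_MIN + (p / 255.0) * (G_MAX - G_MIN) for p in row]
            for row in pixels]

def read_image(conductances):
    """Recover gray values by inverting the linear mapping."""
    return [[round(255.0 * (g - G_MIN) / (G_MAX - G_MIN)) for g in row]
            for row in conductances]

image = [[0, 128, 255],
         [64, 32, 200]]
restored = read_image(write_image(image))  # matches the original image
```

A physical device would add programming noise and drift on top of this idealized mapping; keeping thin conduction channels stable enough that the read-back value matches the written one is exactly the problem the alloyed silver-copper electrodes address.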
A close-up view of a new neuromorphic “brain-on-a-chip” that includes tens of thousands of memristors, or memory resistors. Credit: Peng Lin

Giving soft robots feeling In a pair of papers from MIT CSAIL, two teams enable better sense and perception for soft robotic grippers. Mon, 01 Jun 2020 09:00:00 -0400 Rachel Gordon | MIT CSAIL <p>One of the hottest topics in robotics is the field of soft robots, which utilizes squishy and flexible materials rather than traditional rigid materials. But soft robots have been limited due to their lack of good sensing. A good robotic gripper needs to feel what it is touching (tactile sensing), and it needs to sense the positions of its fingers (proprioception). Such sensing has been missing from most soft robots.</p> <p>In a new pair of papers, researchers from MIT’s <a href=””>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.&nbsp;</p> <p>“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing.&nbsp;</p>
<p><a href=”″>One paper</a> builds off last year’s <a href=”″>research</a> from MIT and Harvard University, where a team developed a soft and strong robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus flytrap, to pick up items that are as much as 100 times its weight.&nbsp;</p> <p>To get that newfound versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the gripper not only pick up objects as delicate as potato chips but also classify them — letting the robot better understand what it’s picking up, while also exhibiting that light touch.&nbsp;</p> <p>When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of grip.</p> <p>“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.”&nbsp;</p> <p>In <a href=””>a second paper</a>, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).&nbsp;</p> <p>The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. 
When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy.&nbsp;</p> <p>“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”&nbsp;</p> <p><strong>Magic ball senses&nbsp;</strong></p> <p>The magic ball gripper is made from a soft origami structure, encased by a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms around the object’s shape.&nbsp;</p> <p>While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the greater intricacies of delicacy and understanding were still out of reach — until they added the sensors.&nbsp;&nbsp;</p> <p>When the sensors experience force or strain, the internal pressure changes, and the team can measure this change in pressure to identify what the gripper is touching.&nbsp;</p> <p>In addition to the latex sensor, the team also developed an algorithm that uses this pressure feedback to let the gripper possess a human-like duality of being both strong and precise — and 80 percent of the tested objects were successfully grasped without damage.&nbsp;</p> <p>The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.&nbsp;</p> <p>Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage using this new sensor technology. 
Eventually, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.&nbsp;</p> <p>Hughes co-wrote the new paper with Rus, which they will present virtually at the 2020 International Conference on Robotics and Automation.&nbsp;</p> <p><strong>GelFlex</strong></p> <p>In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way there must be rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.</p> <p>To create GelFlex, the team used silicone material to fabricate the soft and transparent finger, and put one camera near the fingertip and the other in the middle of the finger. Then, they painted reflective ink on the front and side surface of the finger, and added LED lights on the back. This allows the internal fish-eye camera to observe the status of the front and side surface of the finger.&nbsp;</p> <p>The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed. The gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum.&nbsp;</p> <p>During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. 
Out of 80 trials, only three were classified incorrectly.&nbsp;</p> <p>In the future, the team hopes to improve the proprioception and tactile sensing algorithms, and utilize vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors, but should be attainable with embedded cameras.</p> <p>Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.</p> <p></p>
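In spirit, the object-classification step described above amounts to matching a new pressure signature against signatures recorded for known objects. The toy sketch below uses a nearest-centroid rule; both that choice and the feature vectors are illustrative assumptions, not the method the papers actually use.

```python
# Toy sketch of classifying a grasped object from tactile pressure readings.
# Each reading is a vector of pressures from the bladder sensors; the
# nearest-centroid rule and the numbers below are illustrative assumptions.

def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def dist_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled):
    """labelled: dict mapping object name -> list of pressure vectors."""
    return {name: centroid(samples) for name, samples in labelled.items()}

def classify(model, reading):
    """Return the object whose average signature is closest to the reading."""
    return min(model, key=lambda name: dist_sq(model[name], reading))

model = train({
    "potato chip": [[0.1, 0.2, 0.1], [0.2, 0.1, 0.1]],  # light, delicate grasp
    "milk bottle": [[0.9, 0.8, 0.9], [0.8, 0.9, 0.8]],  # firm, heavy grasp
})
assert classify(model, [0.15, 0.1, 0.2]) == "potato chip"
```

A real system would classify from richer time-series features, but the core idea, comparing a live pressure signature to learned ones, is the same.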
Professor Ted Adelson’s team created a soft robotic finger that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body). Photo courtesy of the researchers.

Undergraduates develop next-generation intelligence tools UROP students explore applications in robotics, health care, language understanding, and nuclear engineering. Tue, 26 May 2020 14:35:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>The coronavirus pandemic has driven us apart physically while reminding us of the power of technology to connect. When MIT shut its doors in March, much of campus moved online, to virtual classes, labs, and chatrooms. Among those making the pivot were students engaged in independent research under MIT’s Undergraduate Research Opportunities Program (UROP).&nbsp;</p> <p>With regular check-ins with their advisors via Slack and Zoom, many students succeeded in pushing through to the end. One even carried on his experiments from his bedroom, after schlepping&nbsp;his Sphero Bolt robots home in a backpack. “I’ve been so impressed by their resilience and dedication,” says Katherine Gallagher, one of three artificial intelligence engineers at MIT Quest for Intelligence who works with students each semester on intelligence-related applications. “There was that initial week of craziness and then they were right back to work.” Four projects from this spring are highlighted below.</p> <p><strong>Learning to explore the world with open eyes and ears</strong></p> <p>Robots rely heavily on images beamed through their built-in cameras, or surrogate “eyes,” to get around. MIT senior Alon Kosowsky-Sachs thinks they could do a lot more if they also used their microphone “ears.”&nbsp;</p> <p>From his home in Sharon, Massachusetts, where he retreated after MIT closed in March, Kosowsky-Sachs is training four baseball-sized Sphero Bolt robots to roll around a homemade arena. His goal is to teach the robots to pair sights with sounds, and to exploit this information to build better representations of their environment. 
He’s working with&nbsp;<a href=””>Pulkit Agrawal</a>, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, who is interested in designing algorithms with human-like curiosity.</p> <p>While Kosowsky-Sachs sleeps, his robots putter away, gliding&nbsp;through an object-strewn rink he built for them from two-by-fours. Each burst of movement becomes a pair of one-second video and audio clips. By day, Kosowsky-Sachs trains a “curiosity” model aimed&nbsp;at pushing the robots to become bolder, and more skillful, at navigating their obstacle course.</p> <p>“I want them to see something through their camera, and hear something from their microphone, and know that these two things happen together,” he says. “As humans, we combine a lot of sensory information to get added insight about the world. If we hear a thunder clap, we don’t need to see lightning to know that a storm has arrived. Our hypothesis is that robots with a better model of the world will be able to accomplish more difficult tasks.”</p> <p><strong>Training a robot agent to design a more efficient nuclear reactor&nbsp;</strong></p> <p>One important factor driving the cost of nuclear power is the layout of its reactor core. If fuel rods are arranged in an optimal fashion, reactions last longer, burn less fuel, and need less maintenance. As engineers look for ways to bring down the cost of nuclear energy, they are eying the redesign of the reactor core.</p> <p>“Nuclear power emits very little carbon and is surprisingly safe compared to other energy sources, even solar or wind,” says third-year student Isaac Wolverton. 
“We wanted to see if we could use AI to make it more efficient.”&nbsp;</p> <p>In a project with Josh Joseph, an AI engineer at the MIT Quest, and&nbsp;<a href=””>Koroush Shirvan</a>, an assistant professor in MIT’s Department of Nuclear Science and Engineering, Wolverton spent the year training a reinforcement learning agent to find the best way to lay out fuel rods in a reactor core. To simulate the process, he turned the problem into a game, borrowing a machine learning technique for producing agents with superhuman abilities at chess and Go.</p> <p>He started by training his agent on a simpler problem: arranging colored tiles on a grid so that as few tiles as possible of the same color would touch. As Wolverton increased the number of options, from two colors to five, and four tiles to 225, he grew excited as the agent continued to find the best strategy. “It gave us hope we could teach it to swap the cores into an optimal arrangement,” he says.</p> <p>Eventually, Wolverton moved to an environment meant to simulate a 36-rod reactor core, with two enrichment levels and 2.1 million possible core configurations. With input from researchers in Shirvan’s lab, Wolverton trained an agent that arrived at the optimal solution.</p> <p>The lab is now building on Wolverton’s code to try to train an agent in a life-sized 100-rod environment with 19 enrichment levels.&nbsp;“There’s no breakthrough at this point,” he says. “But we think it’s possible, if we can find enough compute resources.”</p> <p><strong>Making more livers available to patients who need them</strong></p> <p>About 8,000 patients in the United States receive liver transplants each year, but that’s only half the number who need one. Many more livers might&nbsp;be made available if hospitals had a faster way to screen them, researchers say. 
In a collaboration with&nbsp;Massachusetts General Hospital, MIT Quest is evaluating whether automation could help to boost the nation’s supply of viable livers.&nbsp;&nbsp;</p> <p>In approving&nbsp;a liver for transplant, pathologists estimate its fat content from a slice of tissue. If it’s low enough, the liver is deemed ready for transplant. But&nbsp;there are often not enough qualified doctors to review tissue samples&nbsp;on the tight timeline needed to match livers with recipients.&nbsp;A shortage of doctors, coupled with the subjective nature of analyzing tissue, means that viable livers are inevitably discarded.</p> <p>This loss represents a huge opportunity for machine learning, says third-year student Kuan Wei Huang, who joined the project to explore AI applications in health care. The project involves training a deep neural network to pick out globules of fat on&nbsp;liver tissue slides to estimate the liver’s overall fat content.</p> <p>One challenge, says Huang, has been figuring out how to handle variations in how various pathologists classify fat globules. “This makes it harder to tell whether I’ve created the appropriate masks to feed into the neural net,” he says. “However, after meeting with experts in the field, I received clarifications and was able to continue working.”</p> <p>Trained on images labeled by pathologists, the model will eventually learn to isolate fat globules&nbsp;in unlabeled images on its own. The final output will be&nbsp;a fat content estimate with pictures of highlighted fat globules showing how the model arrived at its final count. “That’s the easy part — we just count up the&nbsp;pixels in the highlighted globules&nbsp;as a percentage of the overall biopsy and we have our fat content estimate,” says the Quest’s Gallagher, who is leading the project.</p> <p>Huang says he’s excited by the project’s potential to help people. 
“Using machine learning to address medical problems is one of the best ways that a computer scientist can impact the world.”</p> <p><strong>Exposing the hidden constraints of what we mean in what we say</strong></p> <p>Language shapes our understanding of the world in subtle ways, with slight variations in the words we use conveying sharply different meanings. The sentence, “Elephants live in Africa and Asia,” looks a lot like the sentence “Elephants eat twigs and leaves.”&nbsp;But most readers will conclude that the elephants in the first sentence are split into distinct groups living on separate continents but not apply the same reasoning to the second sentence, because eating twigs and eating leaves can both be true of the same elephant in a way that living on different continents cannot.</p> <p>Karen Gu is a senior majoring in computer science and molecular biology, but instead of putting cells under a microscope for her SuperUROP project, she chose to look at sentences like the ones above. “I’m fascinated by the complex and subtle things that we do to constrain language understanding, almost all of it subconsciously,” she says.</p> <p>Working with&nbsp;<a href=””>Roger Levy</a>, a professor in MIT’s Department of Brain and Cognitive Sciences, and postdoc MH Tessler, Gu explored how prior knowledge guides our interpretation of syntax and ultimately, meaning. In the sentences above, prior knowledge about geography and mutual exclusivity interact with syntax to produce different meanings.</p> <p>After steeping herself in linguistics theory, Gu built a model to explain how, word by word, a given sentence produces meaning. She then ran a set of online experiments to see how human subjects would interpret analogous sentences in a story. Her experiments, she says, largely validated intuitions from linguistic theory.</p> <p>One challenge, she says, was having to reconcile two approaches for studying language. 
“I had to figure out how to combine formal linguistics, which applies an almost mathematical approach to understanding how words combine, and probabilistic semantics-pragmatics, which has focused more on how people interpret whole utterances.”</p> <p>After MIT closed in March, she was able to finish the project from her parents’ home in East Hanover, New Jersey. “Regular meetings with my advisor have been really helpful in keeping me motivated and on track,” she says. She says she also got to improve her web-development skills, which will come in handy when she starts work at Benchling, a San Francisco-based software company, this summer.</p> <p>Spring semester Quest UROP projects were funded, in part, by the MIT-IBM Watson AI Lab and Eric Schmidt,&nbsp;technical advisor to Alphabet Inc., and his wife, Wendy.</p>
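The counting step Gallagher calls “the easy part” in the liver project above, turning the model’s highlighted globules into a fat-content estimate, reduces to a pixel ratio. A minimal sketch, assuming the model’s output is a binary mask (one 0/1 value per biopsy-image pixel):

```python
# Sketch of the final fat-content computation: given a binary mask in which
# 1 marks a pixel the model highlighted as part of a fat globule, the
# estimate is simply the highlighted fraction of the biopsy image.
# The 0/1 mask layout is an assumption about the model's output format.

def fat_fraction(mask):
    """mask: 2D list of 0/1 values, one per biopsy-image pixel."""
    total = sum(len(row) for row in mask)
    fat = sum(sum(row) for row in mask)
    return fat / total

mask = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
]
assert fat_fraction(mask) == 0.25  # 2 of 8 pixels fall inside globules
```

The hard part, as the article notes, is producing that mask reliably in the first place, given how differently pathologists label fat globules.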
Students participating in MIT Quest for Intelligence-funded UROP projects include: (clockwise from top left) Alon Kosowsky-Sachs, Isaac Wolverton, Kuan Wei Huang, and Karen Gu. Photo collage: Samantha Smiley

Fireflies helps companies get more out of meetings Startup makes meeting data searchable and shareable, and automates data entry into workplace apps. Thu, 21 May 2020 12:16:48 -0400 Zach Winn | MIT News Office <p>Many decisions are made and details sorted out in a productive business meeting. But in order for that meeting to translate into results, participants have to remember all those details, understand their assignments, and follow through on commitments.</p><p>The startup Fireflies is helping people get the most out of their meetings with a note-taking, information-organizing virtual assistant named Fred. Fred transcribes every word of meetings and then uses artificial intelligence to help people sort and share that information later on.</p><p>“There’s a tremendous amount of data generated in meetings that can help your team stay on the same page,” says Sam Udotong ’16, who founded the company with Krish Ramineni in 2016. “We let people capture that data, search through it, and then share it to the places that matter most.”</p><p>The tool integrates with popular meeting and scheduling software like Zoom and Google Calendar so users can quickly add Fred to calls. It also works with collaboration platforms like Slack and customer management software like Salesforce to help ensure plans turn into coordinated action.</p><p>Fireflies is used by people working in roles including sales, recruiting, and product management. They can use the service to automate project management tasks, screen candidates, and manage internal team communications.</p><p>In the last few months, driven in part by the Covid-19 pandemic, Fred has sat through millions of minutes of meetings involving more than half a million people. And the founders believe Fred can do more than simply help people adjust to remote work; it can also help them collaborate more effectively than ever before.</p><p>“[Fred] is giving you perfect memory,” says Udotong, who serves as Fireflies’ chief technology officer. 
“The dream is for everyone to have perfect recall and make all their decisions based on the right information. So being able to search back to exact points in conversation and remember that is powerful. People have told us it makes them look smarter in front of clients.”</p><p><strong>Taking the leap</strong></p><p>Udotong was introduced to the power of machine learning in his first year at MIT while working on a project in which students built a drone that could lead people on campus tours. Later, during his first MIT hackathon, he sought to use machine learning in a cryptography solution. That’s when he met Ramineni, who was a student at the University of Pennsylvania. That’s also when Fireflies was born — although the founders would go on to change everything about the company besides its name as they sought to use artificial intelligence to improve efficiency in a range of fields.</p><p>“We ended up building six iterations of Fireflies before this current meeting assistant,” Udotong remembers. “And every time we would build a different iteration, we would tell our friends, ‘Download it, use it, and get back to us next week, we’ll grab coffee.’ We were making all these agreements and promises, and it became really challenging to keep track of all the conversations we were having to get our products out there. 
We thought, ‘What if we just had an AI that could keep track of conversations for us?’”</p><p>The founders’ initial note-taking solution, built in short bursts between classes and homework, tracked action items written in messages, sending reminders to users later on.</p><p>Following Udotong’s graduation with a degree in aeronautics and astronautics in 2016, the founders decided to use a $25,000 stipend they received from Rough Draft Ventures, along with $5,000 from the MIT Sandbox Innovation Fund, to work on Fireflies through the summer.</p><p>The plan was to work on Fireflies for another short burst: Ramineni was already making plans to attend Cambridge University for his master’s degree in the fall, and Udotong was weighing acceptance letters from graduate schools as well as job offers. By July, however, the founders had changed their plans.</p><p>“I think deciding [on a career path] is really hard these days, even if you identify your passion,” Udotong says. “The easy path for someone in tech is to follow the money and go work for Google or Facebook. We decided to go a different route and take the risk.”</p><p>They moved to Ramineni’s hometown of San Francisco to officially launch the company. Udotong remembers getting to San Francisco with $100 in his bank account.</p><p>The founders had fully committed themselves to Fireflies, but it didn’t make starting the company any easier. They decided not to raise venture capital in the company’s early years, and Ramineni admits to questioning whether going all in on Fireflies was the right decision as recently as 2018.</p><p>The founders also weren’t sure a radically new software category would be embraced so readily by businesses. They continued to invest in the voice AI space, as they believed that the need for their technology was growing and the timing was right.</p><p>“We realized that there’s a ton of data generated every day through speech, either in meetings like Zoom or in person,” Ramineni says. 
“Today, two hours after your meeting, unless you’re taking good notes or recording, you’re not going to be able to recall everything. You might not even remember what action items you agreed to a few hours ago. It’s such a common problem that people don’t even know it’s an issue. You have meetings and you expect things to slip through the cracks.”</p><p><strong>Illuminating conversations</strong></p><p>Today the Fireflies solution shows little trace of the arduous journey the founders took to get to this point. In fact, building simplicity into the tool has been a major focus for the founders.</p><p>Fred can join calendar events automatically or be added to meetings using the <a href=””></a> address. Fred joins Zoom, Google Meet, Skype, or Microsoft calls as a participant, silently transcribing and generating notes from the meeting. After the meeting, the AI assistant sends a full transcript to whomever the organizer chooses, allowing users to click on sections of the transcript to hear that part of the meeting audio. Users can also search the transcript and go through an hourlong meeting in five minutes, according to the company. The transcript can also surface action items, tasks, metrics, pricing, and other topics of interest.</p><p>After each meeting, Fireflies can automatically sync all this meeting data into apps from companies like Slack, Salesforce, and Hubspot.</p><p>“Fireflies is like a personal assistant that helps connect your systems of communication with your systems of record,” Udotong says. 
“If you’re having these meetings over Zoom and Google Meet every day, and you’re interacting with Slack or Trello, Fireflies is that middle router that can bring synchronicity to your work life.”</p><p>In the midst of the Covid-19 pandemic, millions of companies have been forced to operate remotely, and the founders think the impact of that response will be felt for far longer than the virus.</p><p>“I think the world’s now realizing that people can be fully distributed,” says Ramineni, who notes Fireflies’ team has been remote since he and Udotong began working together in college hackathons from different campuses in 2014.</p><p>And as the company has grown, customers have begun using Fred for use cases the founders hadn’t even considered, like sending Fred to meetings that they can’t attend and reviewing the notes later on. Customers, the founders believe, are realizing that being able to quickly search, sort, and otherwise collaborate across audio data unlocks a world of new possibilities.</p><p>“It’s kind of like what Google did with search,” Udotong says. “There was five to 10 years of web data building up, and there was no way for people to find what they were looking for. The same thing is true today of audio and meeting data. It’s out there, but there’s no way to actually find what you’re looking for because it’s never even stored in the first place.”</p>
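The “search back to exact points in conversation” workflow described above can be pictured as keyword search over timestamped transcript segments. The sketch below is hypothetical: the `(seconds, speaker, text)` layout is an assumption for illustration, not Fireflies’ actual data model or API.

```python
# Hypothetical sketch of keyword search over a timestamped transcript,
# in the spirit of the Fireflies workflow described above. The
# (seconds, speaker, text) tuple layout is an assumption, not their API.

def search(transcript, keyword):
    """Return (timestamp, speaker, text) entries whose text mentions keyword."""
    kw = keyword.lower()
    return [entry for entry in transcript if kw in entry[2].lower()]

transcript = [
    (12.0, "Sam", "Let's review the budget for Q3."),
    (47.5, "Krish", "Action item: send the draft by Friday."),
    (93.2, "Sam", "Agreed, budget sign-off happens next week."),
]
hits = search(transcript, "budget")
assert [t for t, _, _ in hits] == [12.0, 93.2]
```

Because each match carries a timestamp, a user can jump straight to that moment in the meeting audio, which is the “perfect recall” behavior Udotong describes.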
The startup is helping people get the most out of their meetings with a note-taking, information-organizing virtual assistant named Fred. Image courtesy of the researchers

Machine-learning tool could help develop tougher materials Engineers develop a rapid screening system to test fracture resistance in billions of potential materials. Wed, 20 May 2020 11:00:00 -0400 David L. Chandler | MIT News Office <p>For engineers developing new materials or protective coatings, there are billions of different possibilities to sort through. Lab tests or even detailed computer simulations to determine their exact properties, such as toughness, can take hours, days, or more for each variation. Now, a new artificial intelligence-based approach developed at MIT could reduce that to a matter of milliseconds, making it practical to screen vast arrays of candidate materials.</p><p>The system, which MIT researchers hope could be used to develop stronger protective coatings or structural materials — for example, to protect aircraft or spacecraft from impacts — is described in a paper in the journal <em>Matter</em>, by MIT postdoc Chi-Hua Yu, civil and environmental engineering professor and department head Markus J. Buehler, and Yu-Chuan Hsu at the National Taiwan University.</p><p>The focus of this work was on predicting the way a material would break or fracture, by analyzing the propagation of cracks through the material’s molecular structure. Buehler and his colleagues have spent many years studying fractures and other failure modes in great detail, since understanding failure processes is key to developing robust, reliable materials. “One of the specialties of my lab is to use what we call molecular dynamics simulations, or basically atom-by-atom simulations” of such processes, Buehler says.</p><p>These simulations provide a chemically accurate description of how fracturing happens, he says. But it’s slow, because it requires solving equations of motion for every single atom. “It takes a lot of time to simulate these processes,” he says. 
The team decided to explore ways of streamlining that process, using a machine-learning system.</p><p>“We’re kind of taking a detour,” he says. “We’ve been asking, what if you had just the observation of how fracturing happens [in a given material], and let computers learn this relationship itself?” To do that, artificial intelligence (AI) systems need a variety of examples to use as a training set, to learn about the correlations between the material’s characteristics and its performance.</p><p>In this case, they were looking at a variety of composite, layered coatings made of crystalline materials. The variables included the composition of the layers and the relative orientations of their orderly crystal structures, and the way those materials each responded to fracturing, based on the molecular dynamics simulations. “We basically simulate, atom by atom, how materials break, and we record that information,” Buehler says.</p> <p><img alt=”” src=”/sites/default/files/images/inline/images/predicting-fractures-2.gif” style=”width: 500px; height: 375px;” /></p> <p><em><span style=”font-size:10px;”>The team used atom-by-atom simulations to determine how cracks propagate through different materials. This animation shows one such simulation, in which the crack propagates all the way through.</span></em></p><p>They painstakingly generated hundreds of such simulations, with a wide variety of structures, and subjected each one to many different simulated fractures. Then they fed large amounts of data about all these simulations into their AI system, to see if it could discover the underlying physical principles and predict the performance of a new material that was not part of the training set.</p><p>And it did. 
“That’s the really exciting thing,” Buehler says, “because the computer simulation through AI can do what normally takes a very long time using molecular dynamics, or using finite element simulations, which are another way that engineers solve this problem, and it’s very slow as well. So, this is a whole new way of simulating how materials fail.”</p><p>How materials fail is crucial information for any engineering project, Buehler emphasizes. Materials failures such as fractures are “one of the biggest reasons for losses in any industry. For inspecting planes or trains or cars, or for roads or infrastructure, or concrete, or steel corrosion, or to understand the fracture of biological tissues such as bone, the ability to simulate fracturing with AI, and doing that quickly and very efficiently, is a real game changer.”</p><p>The improvement in speed produced by using this method is remarkable. Hsu explains that “for single simulations in molecular dynamics, it has taken several hours to run the simulations, but in this artificial intelligence prediction, it only takes 10 milliseconds to go through all the predictions from the patterns, and show how a crack forms step by step.”</p> <p>”Over the past 30 years or so there have been multiple approaches to model crack propagation in solids, but it remains a formidable and computationally expensive problem,” says Pradeep Guduru, a professor of engineering at Brown University, who was not involved in this work. “By shifting the computational expense to training a robust machine-learning algorithm, this new approach can potentially result in a quick and computationally inexpensive design tool, which is always desirable for practical applications.”</p><p>The method they developed is quite generalizable, Buehler says. 
“Even though in our paper we only applied it to one material with different crystal orientations, you can apply this methodology to much more complex materials.” And while they used data from atomistic simulations, the system could also be used to make predictions on the basis of experimental data such as images of a material undergoing fracturing.</p><p>“If we had a new material that we’ve never simulated before,” he says, “if we have a lot of images of the fracturing process, we can feed that data into the machine-learning model as well.” Whatever the input, simulated or experimental, the AI system essentially goes through the evolving process frame by frame, noting how each image differs from the one before in order to learn the underlying dynamics.</p><p>For example, as researchers make use of the new facilities in MIT.nano, the Institute’s facility dedicated to fabricating and testing materials at the nanoscale, vast amounts of new data about a variety of synthesized materials will be generated.</p><p>“As we have more and more high-throughput experimental techniques that can produce a lot of images very quickly, in an automated way, these kind of data sources can immediately be fed into the machine-learning model,” Buehler says. “We really think that the future will be one where we have a lot more integration between experiment and simulation, much more than we have in the past.”</p><p>The system could be applied not just to fracturing, as the team did in this initial demonstration, but to a wide variety of processes unfolding over time, he says, such as diffusion of one material into another, or corrosion processes. “Anytime where you have evolutions of physical fields, and we want to know how these fields evolve as a function of the microstructure,” he says, this method could be a boon.</p><p>The research was supported by the U.S. Office of Naval Research and the Army Research Office.</p>
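The frame-by-frame idea described above, learning how each frame differs from the one before and then rolling a new initial state forward, can be caricatured with an average-delta model. The real system is a trained neural network; this constant-delta stand-in is purely an illustrative assumption.

```python
# Caricature of the frame-by-frame learning described above: estimate the
# average change between consecutive simulation frames, then roll a new
# initial frame forward with it. A real surrogate uses a neural network;
# this constant-delta model is an illustrative assumption.

def learn_delta(sequences):
    """sequences: list of frame sequences; each frame is a flat list of floats."""
    total, count = None, 0
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            step = [b - a for a, b in zip(prev, nxt)]
            total = step if total is None else [t + s for t, s in zip(total, step)]
            count += 1
    return [t / count for t in total]

def roll_forward(frame, delta, steps):
    """Predict `steps` future frames from a starting frame."""
    frames = [frame]
    for _ in range(steps):
        frame = [x + d for x, d in zip(frame, delta)]
        frames.append(frame)
    return frames

# Training sequences where every pixel grows by 1.0 each frame:
delta = learn_delta([[[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]])
assert delta == [1.0, 1.0]
assert roll_forward([5.0, 5.0], delta, 2) == [[5.0, 5.0], [6.0, 6.0], [7.0, 7.0]]
```

The speedup the article reports comes from exactly this substitution: once the dynamics are learned, stepping the model forward is a cheap evaluation rather than an atom-by-atom simulation.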
The researchers ran hundreds of atom-by-atom simulations of the propagation of cracks through different kinds of layered material, to see which ones were most effective at stopping the cracks from making it all the way through the material. Shown here are a variety of simulation runs showing different outcomes. Image courtesy of M. Hsu, C. Yu and M.J. Buehler

Marshaling artificial intelligence in the fight against Covid-19 The MIT-IBM Watson AI Lab is funding 10 research projects aimed at addressing the health and economic consequences of the pandemic. Tue, 19 May 2020 14:10:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>Artificial intelligence&nbsp;could&nbsp;play a&nbsp;decisive&nbsp;role in stopping the Covid-19 pandemic. To give the technology a push, the MIT-IBM Watson AI Lab is funding 10 projects at MIT aimed at&nbsp;advancing AI’s transformative potential for society. The research will target the immediate public health and economic challenges of this moment. But it could have&nbsp;a&nbsp;lasting impact on how we evaluate and respond to risk long after the crisis has passed. The 10 research projects are highlighted&nbsp;below.</p> <p><strong>Early detection of sepsis in Covid-19 patients&nbsp;</strong></p> <p>Sepsis is a deadly complication of Covid-19, the disease caused by the new coronavirus SARS-CoV-2. About 10 percent of Covid-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive.</p> <p>Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival. Early detection can also help hospitals prioritize intensive-care resources for their sickest patients. In a project led by MIT Professor&nbsp;<a href=””>Daniela Rus</a>, researchers will develop a machine learning system to analyze images of patients’ white blood cells for signs of an activated immune response against sepsis.</p> <p><strong>Designing proteins to block SARS-CoV-2</strong></p> <p>Proteins are the basic building blocks of life, and with AI, researchers can explore and manipulate their structures to address longstanding problems. 
Take perishable food: The MIT-IBM Watson AI Lab&nbsp;<a href=””>recently used AI</a>&nbsp;to discover that a silk protein made by honeybees could double as a coating for quick-to-rot foods to extend their shelf life.</p> <p>In a related project led by MIT professors&nbsp;<a href=””>Benedetto Marelli</a>&nbsp;and&nbsp;<a href=””>Markus Buehler</a>, researchers will enlist the protein-folding method used in their honeybee-silk discovery to try to defeat the new coronavirus. Their goal is to design proteins able to block the virus from binding to human cells, and to synthesize and test their unique protein creations in the lab.</p> <p><strong>Saving lives while restarting the U.S. economy</strong></p> <p>Some states are reopening for business even as questions remain about how to protect those most vulnerable to the coronavirus. In a project led by MIT professors&nbsp;<a href=””>Daron Acemoglu</a>,&nbsp;<a href=””>Simon Johnson</a>&nbsp;and&nbsp;<a href=””>Asu Ozdaglar</a>, researchers will model the effects of targeted lockdowns on the economy and public health.</p> <p>In a&nbsp;<a href=””>recent working paper</a>&nbsp;co-authored by Acemoglu,&nbsp;Victor Chernozhukov, Ivan Werning, and Michael Whinston,&nbsp;MIT economists analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.</p> <p><strong>Which materials make the best face masks?</strong></p> <p>Massachusetts and six other states have ordered residents to wear face masks in public to limit the spread of coronavirus. 
But apart from the coveted N95 mask, which traps 95 percent of airborne particles 300 nanometers or larger, the effectiveness of many masks remains unclear due to a lack of standardized methods to evaluate them.</p> <p>In a project led by MIT Associate Professor&nbsp;<a href=””>Lydia Bourouiba</a>, researchers are developing a rigorous set of methods to measure how well homemade and medical-grade masks do at blocking the tiny droplets of saliva and mucus expelled during normal breathing, coughs, or sneezes. The researchers will test materials worn alone and together, and in a variety of configurations and environmental conditions. Their methods and measurements will determine how&nbsp;well materials protect mask wearers and the people around them.</p> <p><strong>Treating Covid-19 with repurposed drugs</strong></p> <p>As Covid-19’s global death toll mounts, researchers are racing to find a cure among already-approved drugs. Machine learning can expedite screening by letting researchers quickly predict if promising candidates can hit their target.</p> <p>In a project led by MIT Assistant Professor&nbsp;<a href=””>Rafael Gomez-Bombarelli</a>, researchers will represent molecules in three dimensions to see if this added spatial information can help to identify drugs most likely to be effective against the disease. They will use NASA’s Ames and U.S. Department of Energy’s NERSC supercomputers to further speed the screening process.</p> <p><strong>A privacy-first approach to automated contact tracing</strong></p> <p>Smartphone data can help limit the spread of Covid-19 by identifying people who have come into contact with someone infected with the virus, and thus may have caught the infection themselves. 
But automated contact tracing also carries serious privacy risks.</p> <p>In&nbsp;<a href=””>collaboration</a>&nbsp;with MIT Lincoln Laboratory and others, MIT researchers&nbsp;<a href=””>Ronald Rivest</a>&nbsp;and&nbsp;<a href=””>Daniel Weitzner</a>&nbsp;will use encrypted Bluetooth data to ensure personally identifiable information remains anonymous and secure.</p> <p><strong>Overcoming manufacturing and supply hurdles to provide global access to a coronavirus vaccine</strong></p> <p>A vaccine against SARS-CoV-2 would be a crucial turning point in the fight against Covid-19. Yet, its potential impact will be determined by the ability to rapidly and equitably distribute billions of doses globally.&nbsp;This is an unprecedented challenge in biomanufacturing.&nbsp;</p> <p>In a project led by MIT professors&nbsp;<a href=””>Anthony Sinskey</a>&nbsp;and&nbsp;<a href=””>Stacy Springs</a>, researchers will build data-driven statistical models to evaluate tradeoffs in scaling the manufacture and supply of vaccine candidates. Questions include how much production capacity will need to be added, the impact of centralized versus distributed operations, and how to design strategies for&nbsp;fair vaccine distribution. The goal is to give decision-makers the evidence&nbsp;needed&nbsp;to cost-effectively&nbsp;achieve&nbsp;global&nbsp;access.</p> <p><strong>Leveraging electronic medical records to find a treatment for Covid-19</strong></p> <p>Developed as a treatment for Ebola, the anti-viral drug remdesivir is now in clinical trials in the United States as a treatment for Covid-19. Similar efforts to repurpose already-approved drugs to treat or prevent the disease are underway.</p> <p>In a project led by MIT professors&nbsp;<a href=””>Roy Welsch</a>&nbsp;and <a href=””>Stan Finkelstein</a>, researchers will use statistics, machine learning, and simulated clinical drug trials to find and test already-approved drugs as potential therapeutics against Covid-19. 
Researchers will sift through millions of electronic health records and medical claims for signals indicating that drugs used to fight chronic conditions like hypertension, diabetes, and gastric reflux might also work against Covid-19 and other diseases.</p> <p><strong>Finding better ways to treat Covid-19 patients on ventilators&nbsp;</strong></p> <p>Troubled breathing from acute respiratory distress syndrome is one of the complications that brings Covid-19 patients to the ICU. There, life-saving machines help patients breathe by mechanically pumping oxygen into the lungs. But even as towns and cities lower their Covid-19 infections through social distancing, there remains a national shortage of mechanical ventilators and serious health risks of ventilation itself.</p> <p>In collaboration with IBM researchers Zach Shahn and Daby Sow, MIT researchers&nbsp;<a href=””>Li-Wei Lehman</a>&nbsp;and&nbsp;<a href=””>Roger Mark</a>&nbsp;will develop an AI tool to help doctors find better ventilator settings for Covid-19 patients and decide how long to keep them on a machine. Shortened ventilator use can limit lung damage while freeing up machines for others.&nbsp;To build their models, researchers will draw on data from intensive-care patients with acute respiratory distress syndrome, as well as Covid-19 patients at a local Boston hospital.</p><p><strong>Returning to normal via targeted lockdowns, personalized treatments, and mass testing</strong></p> <p>In a few short months, Covid-19 has devastated towns and cities around the world. Researchers are now piecing together the data to understand how government policies can limit new infections and deaths and how targeted policies might protect the most vulnerable.</p> <p>In a project led by MIT Professor&nbsp;<a href=””>Dimitris Bertsimas</a>, researchers will study the effects of lockdowns and other measures meant to reduce new infections and deaths and prevent the health-care system from being swamped. 
In a second phase of the project, they will develop machine learning models to predict how vulnerable a given patient is to Covid-19, and what personalized treatments might be most effective. They will also develop an inexpensive, spectroscopy-based test for Covid-19 that can deliver results in minutes and pave the way for mass testing. The project will draw on clinical data from four hospitals in the United States and Europe, including Codogno Hospital, which reported Italy’s first infection.</p>
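Among the projects above, the privacy-first contact-tracing design lends itself to a concrete sketch. The core idea — broadcast short-lived random identifiers rather than locations, so matching happens on each person's own phone — can be illustrated as follows. This is a simplified illustration only; the function names and key sizes are invented, not the actual protocol developed with MIT Lincoln Laboratory.

```python
import hashlib, os

# Hypothetical sketch of rotating-identifier contact tracing.
def chirp(seed: bytes, interval: int) -> bytes:
    """Derive a short, unlinkable broadcast ID for one time interval."""
    return hashlib.sha256(seed + interval.to_bytes(4, "big")).digest()[:16]

# Phone A broadcasts chirps derived from its secret seed over Bluetooth.
seed_a = os.urandom(32)
heard_by_b = {chirp(seed_a, t) for t in range(100, 110)}  # B logs what it hears

# If A later tests positive, A publishes only its seed; B re-derives the
# chirps locally and checks for overlap -- no central location database.
rederived = {chirp(seed_a, t) for t in range(90, 120)}
print(bool(heard_by_b & rederived))  # True -> possible exposure
```

Because each broadcast ID is a one-way hash of a secret seed and a time slot, observers cannot link chirps to a person, and nothing identifiable leaves the phone unless the owner chooses to report an infection.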
White blood cells are among the first to mount an attack against sepsis, a deadly complication of Covid-19. In a project led by MIT Professor Daniela Rus, researchers are developing a machine learning system to detect an activated immune response to sepsis which could lead to earlier, more aggressive treatment. Here, a white blood cell attacks malaria. Image courtesy of the Koch Institute at MIT and Ragon Institute of MGH, MIT and Harvard.

Visualizing the world beyond the frame Researchers test how far artificial intelligence models can go in dreaming up varied poses and colors of objects and animals in photos. Wed, 06 May 2020 15:00:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>Most firetrucks come in red, but it’s not hard to picture one in blue. Computers aren’t nearly as creative.</p> <p>Their understanding of the world is colored, often literally, by the data they’ve trained on. If all they’ve ever seen are pictures of red fire trucks, they have trouble drawing anything else.&nbsp;</p> <p>To give computer vision models a fuller, more imaginative view of the world, researchers have tried feeding them more varied images. Some have tried&nbsp;<a href=”″>shooting objects</a>&nbsp;from odd angles, and in unusual positions, to better convey their real-world complexity. Others have asked the models to generate pictures of their own, using a form of artificial intelligence called GANs, or generative adversarial networks. In both cases, the aim is to fill in the gaps of image datasets to better reflect the three-dimensional world and make face- and object-recognition models less biased.</p> <p>In&nbsp;<a href=””>a new study</a>&nbsp;at the International Conference on Learning Representations, MIT researchers propose a kind of creativity test to see how far GANs can go in riffing on a given image. They “steer” the model into the subject of the photo and ask it to draw objects and animals close up, in bright light, rotated in space, or in different colors.</p> <p>The model’s creations vary in subtle, sometimes surprising ways. And those variations, it turns out, closely track how creative human photographers were in framing the scenes in front of their lens. 
Those biases are baked into the underlying dataset, and the steering method proposed in the study is meant to make those limitations visible.&nbsp;</p> <p>“Latent space is where the DNA of an image lies,” says study co-author <a href=”” target=”_self”>Ali Jahanian</a>, a research scientist at MIT. “We show that you can steer into this abstract space and control what properties you want the GAN to express — up to a point. We find that a GAN’s creativity is limited by the diversity of images it learns from.” Jahanian is joined on the study by co-author Lucy Chai, a PhD student at MIT, and senior author&nbsp;<a href=””>Phillip Isola</a>, the Bonnie and Marty (1964) Tenenbaum CD Assistant Professor of Electrical Engineering and Computer Science.</p> <p>The researchers applied their method to GANs that had already been trained on ImageNet’s 14 million photos. They then measured how far the models could go in transforming different classes of animals, objects, and scenes. The level of artistic risk-taking, they found, varied widely by the type of subject the GAN was trying to manipulate.&nbsp;</p> <p>For example, a rising hot air balloon generated more striking poses than, say, a rotated pizza. The same was true for zooming out on a Persian cat rather than a robin, with the cat melting into a pile of fur the farther it recedes from the viewer while the bird stays virtually unchanged. The model happily turned a car blue, and a jellyfish red, they found, but it refused to draw a goldfinch or firetruck in anything but their standard-issue colors.&nbsp;</p> <p>The GANs also seemed astonishingly attuned to some landscapes. When the researchers bumped up the brightness on a set of mountain photos, the model whimsically added fiery eruptions to the volcano, but not a geologically older, dormant relative in the Alps. 
It’s as if the GANs picked up on the lighting changes as day slips into night, and seemed to understand that only volcanoes grow brighter at night.</p> <p>The study is a reminder of just how deeply the outputs of deep learning models hinge on their data inputs, researchers say. GANs have caught the attention of intelligence researchers for their ability to extrapolate from data, and visualize the world in new and inventive ways.&nbsp;</p> <p>They can take a headshot and transform it into a Renaissance-style portrait or favorite celebrity. But though GANs are capable of learning surprising details on their own, like how to divide a landscape into&nbsp;<a href=””>clouds and trees</a>, or generate images that&nbsp;<a href=”″>stick in people’s minds</a>, they are still mostly slaves to data. Their creations reflect the biases of thousands of photographers, both in what they’ve chosen to shoot and how they framed their subject.</p> <p>“What I like about this work is it’s poking at representations the GAN has learned, and pushing it to reveal why it made those decisions,” says Jaakko Lehtinen, a professor at Finland’s Aalto University and a research scientist at NVIDIA who was not involved in the study. “GANs are incredible, and can learn all kinds of things about the physical world, but they still can’t represent images in physically meaningful ways, as humans can.”</p>
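The "steering" operation itself is simple: add a learned direction vector to an image's latent code and regenerate. The sketch below uses a toy linear "generator" and plain least squares to find a direction that raises one attribute (brightness) — purely illustrative stand-ins for the large pretrained GANs and the self-supervised steering method used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-in for a GAN generator: latent vector -> "image".
W = rng.normal(size=(64, 8))
def generate(z):
    return W @ z

def brightness(img):          # the attribute we want to steer
    return img.mean()

# Learn a steering direction by regressing the attribute on latent codes.
Z = rng.normal(size=(500, 8))
b = np.array([brightness(generate(z)) for z in Z])
d, *_ = np.linalg.lstsq(Z, b, rcond=None)
d /= np.linalg.norm(d)

# Steering: walk the latent code along d to brighten the generated image.
z = rng.normal(size=8)
print(brightness(generate(z + 2.0 * d)) > brightness(generate(z)))  # True
```

The paper's observation is about where this walk breaks down: in a real GAN, pushing too far along a direction the training data never exhibited (a blue goldfinch, a zoomed-out robin) stops producing plausible images.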
MIT researchers have developed a kind of creativity test for generative models to see how far they can go in visualizing objects in photos from various angles and in different colors. The tool shows how much the model’s imagination depends on the variety of images it has seen. Image courtesy of the researchers.

Study finds stronger links between automation and inequality Job-replacing tech has directly driven the income gap since the late 1980s, economists report. Tue, 05 May 2020 23:59:59 -0400 Peter Dizikes | MIT News Office <p><em>This is part 3 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu.&nbsp;</em></p><p>Modern technology affects different workers in different ways. In some white-collar jobs — designer, engineer — people become more productive with sophisticated software at their side. In other cases, forms of automation, from robots to phone-answering systems, have simply replaced factory workers, receptionists, and many other kinds of employees.</p><p>Now a new study co-authored by an MIT economist suggests automation has a bigger impact on the labor market and income inequality than previous research would indicate — and identifies the year 1987 as a key inflection point in this process, the moment when jobs lost to automation stopped being replaced by an equal number of similar workplace opportunities.</p><p>“Automation is critical for understanding inequality dynamics,” says MIT economist Daron Acemoglu, co-author of a newly published paper detailing the findings.</p><p>Within industries adopting automation, the study shows, the average “displacement” (or job loss) from 1947-1987 was 17 percent of jobs, while the average “reinstatement” (new opportunities) was 19 percent. But from 1987-2016, displacement was 16 percent, while reinstatement was just 10 percent. In short, those factory positions or phone-answering jobs are not coming back.</p><p>“A lot of the new job opportunities that technology brought from the 1960s to the 1980s benefitted low-skill workers,” Acemoglu adds. 
“But from the 1980s, and especially in the 1990s and 2000s, there’s a double whammy for low-skill workers: They’re hurt by displacement, and the new tasks that are coming, are coming slower and benefitting high-skill workers.”</p><p>The new paper, “Unpacking Skill Bias: Automation and New Tasks,” will appear in the May issue of the <em>American Economic Association: Papers and Proceedings</em>. The authors are Acemoglu, who is an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.</p><p><strong>Low-skill workers: Moving backward</strong></p><p>The new paper is one of several studies Acemoglu and Restrepo have conducted recently examining the effects of robots and automation in the workplace. In a just-published paper, they <a href=”″>concluded that across the U.S. from 1993 to 2007, each new robot replaced 3.3 jobs</a>.</p><p>In still another new paper, Acemoglu and Restrepo examined French industry from 2010 to 2015. They <a href=”″>found that firms that quickly adopted robots became more productive and hired more workers, while their competitors fell behind and shed workers — with jobs again being reduced overall</a>.</p><p>In the current study, Acemoglu and Restrepo construct a model of technology’s effects on the labor market, while testing the model’s strength by using empirical data from 44 relevant industries. (The study uses U.S. Census statistics on employment and wages, as well as economic data from the Bureau of Economic Analysis and the Bureau of Labor Statistics, among other sources.)</p><p>The result is an alternative to the standard economic modeling in the field, which has emphasized the idea of “skill-biased” technological change — meaning that technology tends to benefit select high-skilled workers more than low-skill workers, helping the wages of high-skilled workers more, while the value of other workers stagnates. 
Think again of highly trained engineers who use new software to finish more projects more quickly: They become more productive and valuable, while workers lacking synergy with new technology are comparatively less valued.</p><p>However, Acemoglu and Restrepo think even this scenario, with the prosperity gap it implies, is still too benign. Where automation occurs, lower-skill workers are not just failing to make gains; they are actively pushed backward financially. Moreover, Acemoglu and Restrepo note, the standard model of skill-biased change does not fully account for this dynamic; it estimates that productivity gains and real (inflation-adjusted) wages of workers should be higher than they actually are.</p><p>More specifically, the standard model implies an estimate of about 2 percent annual growth in productivity since 1963, whereas annual productivity gains have been about 1.2 percent; it also estimates wage growth for low-skill workers of about 1 percent per year, whereas real wages for low-skill workers have actually dropped since the 1970s.</p><p>“Productivity growth has been lackluster, and real wages have fallen,” Acemoglu says. “Automation accounts for both of those.” Moreover, he adds, “Demand for skills has gone down almost exclusively in industries that have seen a lot of automation.”</p><p><strong>Why “so-so technologies” are so, so bad</strong></p><p>Indeed, Acemoglu says, automation is a special case within the larger set of technological changes in the workplace. As he puts it, automation “is different than garden-variety skill-biased technological change,” because it can replace jobs without adding much productivity to the economy.</p><p>Think of a self-checkout system in your supermarket or pharmacy: It reduces labor costs without making the task more efficient. The difference is the work is done by you, not paid employees. 
These kinds of systems are what Acemoglu and Restrepo have termed “so-so technologies,” because of the minimal value they offer.</p><p>“So-so technologies are not really doing a fantastic job, nobody’s enthusiastic about going one-by-one through their items at checkout, and nobody likes it when the airline they’re calling puts them through automated menus,” Acemoglu says. “So-so technologies are cost-saving devices for firms that just reduce their costs a little bit but don’t increase productivity by much. They create the usual displacement effect but don’t benefit other workers that much, and firms have no reason to hire more workers or pay other workers more.”</p><p>To be sure, not all automation resembles self-checkout systems, which were not around in 1987. Automation at that time consisted more of printed office records being converted into databases, or machinery being added to sectors like textiles and furniture-making. Robots became more commonly added to heavy industrial manufacturing in the 1990s. Automation is a suite of technologies, continuing today with software and AI, which are inherently worker-displacing.</p><p>“Displacement is really the center of our theory,” Acemoglu says. “And it has grimmer implications, because wage inequality is associated with disruptive changes for workers. It’s a much more Luddite explanation.”</p><p>After all, the Luddites — British textile mill workers who destroyed machinery in the 1810s — may be synonymous with technophobia, but their actions were motivated by economic concerns; they knew machines were replacing their jobs. That same displacement continues today, although, Acemoglu contends, the net negative consequences of technology on jobs is not inevitable. We could, perhaps, find more ways to produce job-enhancing technologies, rather than job-replacing innovations.</p><p>“It’s not all doom and gloom,” says Acemoglu. “There is nothing that says technology is all bad for workers. 
It is the choice we make about the direction to develop technology that is critical.”</p>
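The displacement and reinstatement figures reported above make the 1987 inflection easy to see in back-of-the-envelope form:

```python
# Net task change per period, using the percentages cited in the article
# (share of jobs within industries adopting automation).
periods = {
    "1947-1987": {"displacement": 17, "reinstatement": 19},
    "1987-2016": {"displacement": 16, "reinstatement": 10},
}
for name, p in periods.items():
    net = p["reinstatement"] - p["displacement"]
    print(f"{name}: net change {net:+d} percentage points")
# 1947-1987: +2 (new opportunities kept pace with job losses)
# 1987-2016: -6 (displacement outran reinstatement)
```

The sign flip, not the size of either number on its own, is what marks 1987 as the turning point in the authors' account.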
New research by MIT economist Daron Acemoglu shows that since 1987, automation has taken away jobs from lower-skill workers without being replaced by an equivalent number of labor-market opportunities. Image: Stock image edited by MIT News

Robots help some firms, even while workers across industries struggle Study finds manufacturing companies that are quick to automate can thrive, but overall employment drops. Tue, 05 May 2020 00:00:00 -0400 Peter Dizikes | MIT News Office <p><em>This is part 2 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu.&nbsp;</em></p><p>Overall, adding robots to manufacturing reduces jobs — <a href=”″>by more than three per robot, in fact</a>. But a new study co-authored by an MIT professor reveals an important pattern: Firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly.</p><p>The study, by MIT economist Daron Acemoglu, examines the introduction of robots to French manufacturing in recent decades, illuminating the business dynamics and labor implications in granular detail.</p><p>“When you look at use of robots at the firm level, it is really interesting because there is an additional dimension,” says Acemoglu. “We know firms are adopting robots in order to reduce their costs, so it is quite plausible that firms adopting robots early are going to expand at the expense of their competitors whose costs are not going down. And that’s exactly what we find.”</p><p>Indeed, as the study shows, a 20 percentage point increase in robot use in manufacturing from 2010 to 2015 led to a 3.2 percent decline in industry-wide employment. And yet, for firms adopting robots during that timespan, employee hours worked rose by 10.9 percent, and wages rose modestly as well.</p><p>A new paper detailing the study, “Competing with Robots: Firm-Level Evidence from France,” will appear in the May issue of the <em>American Economic Association: Papers and Proceedings</em>. 
The authors are Acemoglu, who is an Institute Professor at MIT; Clair Lelarge, a senior research economist at the Banque de France and the Centre for Economic Policy Research; and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.</p><p><strong>A French robot census</strong></p><p>To conduct the study, the scholars examined 55,390 French manufacturing firms, of which 598 purchased robots during the period from 2010 to 2015. The study uses data provided by France’s Ministry of Industry, client data from French robot suppliers, customs data about imported robots, and firm-level financial data concerning sales, employment, and wages, among other things.</p><p>The 598 firms that did purchase robots, while comprising just 1 percent of manufacturing firms, accounted for about 20 percent of manufacturing production during that five-year period.</p><p>“Our paper is unique in that we have an almost comprehensive [view] of robot adoption,” Acemoglu says.</p><p>The manufacturing industries most heavily adding robots to their production lines in France were pharmaceutical companies, chemicals and plastic manufacturers, food and beverage producers, metal and machinery manufacturers, and automakers.</p><p>The industries investing least in robots from 2010 to 2015 included paper and printing, textiles and apparel manufacturing, appliance manufacturers, furniture makers, and minerals companies.</p><p>The firms that did add robots to their manufacturing processes became more productive and profitable, and the use of automation lowered their labor share — the part of their income going to workers — between roughly 4 and 6 percentage points. 
However, because their investments in technology fueled more growth and more market share, they added more workers overall.</p><p>By contrast, the firms that did not add robots saw no change in the labor share, and for every 10 percentage point increase in robot adoption by their competitors, these firms saw their own employment drop 2.5 percent. Essentially, the firms not investing in technology were losing ground to their competitors.</p><p>This dynamic — job growth at robot-adopting firms, but job losses overall — fits with another finding Acemoglu and Restrepo made in a separate paper about the effects of robots on employment in the U.S. There, the economists found that each robot added to the work force essentially eliminated 3.3 jobs nationally.</p><p>“Looking at the result, you might think [at first] it’s the opposite of the U.S. result, where the robot adoption goes hand in hand with destruction of jobs, whereas in France, robot-adopting firms are expanding their employment,” Acemoglu says. “But that’s only because they’re expanding at the expense of their competitors. What we show is that when we add the indirect effect on those competitors, the overall effect is negative and comparable to what we find in the U.S.”</p><p><strong>Superstar firms and the labor share issue</strong></p><p>The competitive dynamics the researchers found in France resemble those in another high-profile piece of economics research recently published by MIT professors. In a recent paper, MIT economists David Autor and John Van Reenen, along with three co-authors, published <a href=”″>evidence</a> indicating the decline in the labor share in the U.S. 
as a whole was driven by gains made by “superstar firms,” which find ways to lower their labor share and gain market power.</p><p>While those elite firms may hire more workers and even pay relatively well as they grow, labor share declines in their industries, overall.</p><p>“It’s very complementary,” Acemoglu observes about the work of Autor and Van Reenen. However, he notes, “A slight difference is that superstar firms [in the work of Autor and Van Reenen, in the U.S.] could come from many different sources. By having this individual firm-level technology data, we are able to show that a lot of this is about automation.”</p><p>So, while economists have offered many possible explanations for the decline of the labor share generally — including technology, tax policy, changes in labor market institutions, and more — Acemoglu suspects technology, and automation specifically, is the prime candidate, certainly in France.</p><p>“A big part of the [economic] literature now on technology, globalization, labor market institutions, is turning to the question of what explains the decline in the labor share,” Acemoglu says. “Many of those are reasonably interesting hypotheses, but in France it’s only the firms that adopt robots — and they are very large firms — that are reducing their labor share, and that’s what accounts for the entirety of the decline in the labor share in French manufacturing. This really emphasizes that automation, and in particular robots, is a critical part in understanding what’s going on.”</p>
A new study co-authored by an MIT professor shows firms that move quickly to use robots tend to add workers to their payroll, while industry job losses are more concentrated in firms that make this change more slowly. Image: Stock photo

How many jobs do robots really replace? MIT economist Daron Acemoglu’s new research puts a number on the job costs of automation. Mon, 04 May 2020 11:39:18 -0400 Peter Dizikes | MIT News Office <p><em>This is part 1 of a three-part series examining the effects of robots and automation on employment, based on new research from economist and Institute Professor Daron Acemoglu. &nbsp;</em></p><p>In many parts of the U.S., robots have been replacing workers over the last few decades. But to what extent, really? Some technologists have forecast that automation will lead to a future without work, while other observers have been more skeptical about such scenarios.</p><p>Now a study co-authored by an MIT professor puts firm numbers on the trend, finding a very real impact — although one that falls well short of a robot takeover. The study also finds that in the U.S., the impact of robots varies widely by industry and region, and may play a notable role in exacerbating income inequality.</p><p>“We find fairly major negative employment effects,” MIT economist Daron Acemoglu says, although he notes that the impact of the trend can be overstated.</p><p>From 1990 to 2007, the study shows, adding one additional robot per 1,000 workers reduced the national employment-to-population ratio by about 0.2 percent, with some areas of the U.S. affected far more than others.</p><p>This means each additional robot added in manufacturing replaced about 3.3 workers nationally, on average.</p><p>That increased use of robots in the workplace also lowered wages by roughly 0.4 percent during the same time period.</p><p>“We find negative wage effects, that workers are losing in terms of real wages in more affected areas, because robots are pretty good at competing against them,” Acemoglu says.</p><p>The paper, “Robots and Jobs: Evidence from U.S. Labor Markets,” appears in advance online form in the <em>Journal of Political Economy</em>. 
The authors are Acemoglu and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.</p><p><strong>Displaced in Detroit</strong></p><p>To conduct the study, Acemoglu and Restrepo used data on 19 industries, compiled by the International Federation of Robotics (IFR), a Frankfurt-based industry group that keeps detailed statistics on robot deployments worldwide. The scholars combined that with U.S.-based data on population, employment, business, and wages, from the U.S. Census Bureau, the Bureau of Economic Analysis, and the Bureau of Labor Statistics, among other sources.</p><p>The researchers also compared robot deployment in the U.S. to that of other countries, finding it lags behind that of Europe. From 1993 to 2007, U.S. firms actually did introduce almost exactly one new robot per 1,000 workers; in Europe, firms introduced 1.6 new robots per 1,000 workers.</p><p>“Even though the U.S. is a technologically very advanced economy, in terms of industrial robots’ production and usage and innovation, it’s behind many other advanced economies,” Acemoglu says.</p><p>In the U.S., four manufacturing industries account for 70 percent of robots: automakers (38 percent of robots in use), electronics (15 percent), the plastics and chemical industry (10 percent), and metals manufacturers (7 percent).</p><p>Across the U.S., the study analyzed the impact of robots in 722 commuting zones in the continental U.S. — essentially metropolitan areas — and found considerable geographic variation in how intensively robots are utilized.</p><p>Given industry trends in robot deployment, the area of the country most affected is the seat of the automobile industry. Michigan has the highest concentration of robots in the workplace, with employment in Detroit, Lansing, and Saginaw affected more than anywhere else in the country.</p><p>“Different industries have different footprints in different places in the U.S.,” Acemoglu observes. 
“The place where the robot issue is most apparent is Detroit. Whatever happens to automobile manufacturing has a much greater impact on the Detroit area [than elsewhere].”</p><p>In commuting zones where robots were added to the workforce, each robot replaces about 6.6 jobs locally, the researchers found. However, in a subtle twist, adding robots in manufacturing benefits people in other industries and other areas of the country — by lowering the cost of goods, among other things. These national economic benefits are the reason the researchers calculated that adding one robot replaces 3.3 jobs for the country as a whole.</p><p><strong>The inequality issue</strong></p><p>In conducting the study, Acemoglu and Restrepo went to considerable lengths to see if the employment trends in robot-heavy areas might have been caused by other factors, such as trade policy, but they found no complicating empirical effects.</p><p>The study does suggest, however, that robots have a direct influence on income inequality. The manufacturing jobs they replace come from parts of the workforce without many other good employment options; as a result, there is a direct connection between automation in robot-using industries and sagging incomes among blue-collar workers.</p><p>“There are major distributional implications,” Acemoglu says. When robots are added to manufacturing plants, “The burden falls on the low-skill and especially middle-skill workers. 
That’s really an important part of our overall research [on robots], that automation actually is a much bigger part of the technological factors that have contributed to rising inequality over the last 30 years.”</p><p>So while claims about machines wiping out human work entirely may be overstated, the research by Acemoglu and Restrepo shows that the robot effect is a very real one in manufacturing, with significant social implications.</p><p>“It certainly won’t give any support to those who think robots are going to take all of our jobs,” Acemoglu says. “But it does imply that automation is a real force to be grappled with.”</p>
MIT professor Daron Acemoglu is co-author of a new study showing that each robot added to the workforce has the effect of replacing 3.3 jobs across the U.S. Image: Stock image edited by MIT News

A foolproof way to shrink deep learning models Researchers unveil a pruning algorithm to make artificial intelligence applications run faster. Thu, 30 Apr 2020 14:10:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>As more artificial intelligence applications move to smartphones, deep learning models are getting smaller to allow apps to run faster and save battery power. Now, MIT researchers have a new and better way to compress models.&nbsp;</p> <p>It’s so simple that they unveiled it&nbsp;<a href=”″>in a tweet</a>&nbsp;last month: Train the model, prune its weakest connections, retrain the model at its fast, early training rate, and repeat, until the model is as tiny as you want.&nbsp;</p> <p>“That’s it,” says&nbsp;<a href=””>Alex Renda</a>, a PhD student at MIT. “The standard things people do to prune their models are crazy complicated.”&nbsp;</p> <p>Renda discussed the technique when the International Conference on Learning Representations (ICLR) convened remotely this month. Renda is a co-author of the work with&nbsp;<a href=””>Jonathan Frankle</a>, a fellow PhD student in MIT’s&nbsp;<a href=””>Department of Electrical Engineering and Computer Science</a> (EECS), and&nbsp;<a href=””>Michael Carbin</a>, an assistant professor of electrical engineering and computer science — all members of the&nbsp;<a href=””>Computer Science and Artificial Intelligence Laboratory.</a>&nbsp;&nbsp;</p> <p>The search for a better compression technique grew out of Frankle and Carbin’s&nbsp;award-winning&nbsp;<a href=””>Lottery Ticket Hypothesis</a>&nbsp;paper at ICLR last&nbsp;year.&nbsp;They showed that a deep neural network could perform with only one-tenth the number of connections if the right subnetwork was found early in training. Their revelation came as demand for computing power and energy to train ever larger deep learning models was increasing exponentially, a trend that continues to this day. 
Costs of that growth include a rise in planet-warming carbon emissions and a potential drop in innovation as researchers not affiliated with big tech companies compete for scarce computing resources. Everyday users are affected, too. Big AI models eat up mobile-phone bandwidth and battery power.</p> <p>But at a colleague’s suggestion, Frankle decided to see what lessons the lottery ticket hypothesis might hold for pruning, a set of techniques for reducing the size of a neural network by removing unnecessary connections or neurons. Pruning algorithms had been around for decades, but the field saw a resurgence after the breakout success of neural networks at classifying images in the <a href=””>ImageNet competition</a>. As models got bigger, with researchers adding on layers of artificial neurons to boost performance, others proposed techniques for whittling them down.&nbsp;</p> <p><a href=””>Song Han</a>,&nbsp;now an&nbsp;assistant professor at MIT, was one pioneer. Building on a series of influential papers,&nbsp;Han unveiled a pruning algorithm he called AMC, or&nbsp;<a href=””>AutoML for model compression</a>,&nbsp;that’s still the industry standard. Under Han’s technique, redundant neurons and connections are automatically removed,&nbsp;and the model is retrained to restore its initial accuracy.&nbsp;</p> <p>In response to Han’s work, Frankle recently suggested in an&nbsp;<a href=”″>unpublished paper</a>&nbsp;that results could be further improved by rewinding the smaller, pruned model to its initial parameters, or weights, and retraining the smaller model at its faster, initial rate.&nbsp;</p> <p>In the current ICLR study, the researchers realized that the model could simply be rewound to its early training rate without fiddling with any parameters. In any pruning regimen, the tinier a model gets, the less accurate it becomes. 
But when the researchers compared this new method to Han’s AMC or Frankle’s weight-rewinding methods, it performed better no matter how much the model shrank.&nbsp;</p> <p>It’s unclear why the pruning technique works as well as it does. The researchers say they will leave that question for others to answer. As for those who wish to try it, the algorithm is as easy to implement as other pruning methods, without time-consuming tuning, the researchers say.&nbsp;</p> <p>“It’s the pruning algorithm from the ‘Book,’” says Frankle. “It’s clear, generic, and drop-dead simple.”</p> <p>Han, for his part, has now partly shifted focus from compressing AI models to channeling AI to design small, efficient models from the start. His newest method,&nbsp;<a href=”″ target=”_self”>Once for All</a>, also debuts at ICLR. Of the new learning rate method, he says: “I’m happy to see new&nbsp;pruning and retraining&nbsp;techniques evolve, giving more people access to high-performing AI applications.”&nbsp;</p> <p>Support for the study came from <span>the Defense Advanced Research Projects Agency</span>, Google, MIT-IBM Watson AI Lab, MIT Quest for Intelligence, and the U.S. Office of Naval Research.</p>
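The train-prune-rewind recipe described above can be sketched in a few lines. This is only a toy illustration of iterative magnitude pruning with learning-rate rewinding, not the authors' released code: the gradient function, pruning fraction, and step counts are placeholder assumptions, and a real implementation would prune inside a deep network and replay the early learning-rate schedule rather than a single constant rate.

```python
import numpy as np

def prune_smallest(weights, mask, frac):
    """One pruning round: zero out the smallest-magnitude surviving weights."""
    surviving = np.abs(weights[mask])
    k = int(surviving.size * frac)
    if k == 0:
        return mask
    threshold = np.sort(surviving)[k - 1]
    return mask & (np.abs(weights) > threshold)

def lr_rewind_prune(weights, grad_fn, early_lr, rounds=3, frac=0.5, steps=100):
    """Sketch of the recipe: prune the weakest connections, then retrain
    the survivors at the fast early learning rate, and repeat."""
    mask = np.ones_like(weights, dtype=bool)
    for _ in range(rounds):
        mask = prune_smallest(weights, mask, frac)
        for _ in range(steps):                     # "rewind": reuse the early LR
            weights = weights - early_lr * grad_fn(weights)
            weights *= mask                        # pruned weights stay at zero
    return weights, mask
```

With `frac=0.5` and three rounds, roughly 87 percent of the weights end up pruned, while the retraining loop keeps the remaining weights moving at the original fast rate.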
MIT researchers have proposed a technique for shrinking deep learning models that they say is simpler and produces more accurate results than state-of-the-art methods. It works by retraining the smaller, pruned model at its faster, initial learning rate. Image: Alex Renda

Automating the search for entirely new “curiosity” algorithms Researchers show that computers can “write” algorithms that adapt to radically different environments better than algorithms designed by humans. Tue, 28 Apr 2020 09:00:00 -0400 Kim Martineau | MIT Quest for Intelligence <p>Driven by an innate curiosity, children pick up new skills as they explore the world and learn from their experience. Computers, by contrast, often get stuck when thrown into new environments.</p> <p>To get around this, engineers have tried encoding simple forms of curiosity into their algorithms with the hope that an agent pushed to explore will learn about its environment more effectively. An agent with a child’s curiosity might go from learning to pick up, manipulate, and throw objects to understanding the pull of gravity, a realization that could dramatically accelerate its ability to learn many other things.&nbsp;</p> <p>Engineers have discovered many ways of encoding curious exploration&nbsp;into machine learning algorithms. A research team at MIT wondered if a computer could do better, based on a long history of enlisting computers in the search for new algorithms.&nbsp;</p> <p>In recent years, the design of deep neural networks, algorithms that search for solutions by adjusting numeric parameters, has been automated with software like Google’s AutoML and auto-sklearn in Python. That’s made it easier for non-experts to develop AI applications. But while deep nets excel at specific tasks, they have trouble generalizing to new situations. Algorithms expressed in code, in a high-level programming language, by contrast, have the capacity to transfer knowledge across different tasks and environments.&nbsp;</p> <p>“Algorithms designed by humans are very general,” says study co-author&nbsp;<a href=””>Ferran Alet</a>, a graduate student in MIT’s Department of Electrical Engineering and Computer Science and Computer Science and Artificial Intelligence Laboratory (CSAIL). 
“We were inspired to use AI to find algorithms with&nbsp;curiosity strategies that can adapt to a range of environments.”</p> <p>The researchers created a “meta-learning” algorithm that generated 52,000 exploration algorithms. They found that the top two were entirely new — seemingly too obvious or counterintuitive for a human to have proposed. Both algorithms generated exploration behavior that substantially improved learning in a range of simulated tasks, from navigating a two-dimensional grid based on images to making a robotic ant walk. Because the meta-learning process generates high-level computer code as output, both algorithms can be dissected to peer inside their decision-making processes.</p> <p>The paper’s senior authors are&nbsp;<a href=””>Leslie Kaelbling</a>&nbsp;and&nbsp;<a href=””>Tomás Lozano-Pérez</a>, both professors of computer science and electrical engineering at MIT. The work will be presented at the virtual <a href=””>International Conference on Learning Representations</a> later this month.&nbsp;</p> <p>The paper received praise from researchers not involved in the work. “The use of program search to discover a better intrinsic reward is very creative,” says Quoc Le, a principal scientist at Google who has helped pioneer computer-aided design of deep learning models. “I like this idea a lot, especially since the programs are interpretable.”</p> <p>The researchers compare their automated algorithm design process to writing sentences with a limited number of words. They started by choosing a set of basic building blocks to define their exploration algorithms. After studying other curiosity algorithms for inspiration, they picked nearly three dozen high-level operations, including basic programs and deep learning models, to guide the agent to do things like remember previous inputs, compare current and past inputs, and use learning methods to change its own modules. 
The computer then combined up to seven operations at a time to create computation graphs describing 52,000 algorithms.&nbsp;</p> <p>Even with a fast computer, testing them all would have taken decades. So, instead, the researchers limited their search by first ruling out algorithms predicted to perform poorly, based on their code structure alone. Then, they tested their most promising candidates on a basic grid-navigation task requiring substantial exploration but minimal computation. If the candidate did well, its performance became the new benchmark, eliminating even more candidates.&nbsp;</p> <p>Four machines searched over 10 hours to find the best algorithms. More than 99 percent were junk, but about a hundred were sensible, high-performing algorithms. Remarkably, the top 16 were both novel and useful, performing as well as, or better than, human-designed algorithms at a range of other virtual tasks, from landing a moon rover to raising a robotic arm and moving an ant-like robot in a physical simulation.&nbsp;</p> <p>All 16 algorithms shared two basic exploration functions.&nbsp;</p> <p>In the first, the agent is rewarded for&nbsp;visiting new places where it has a greater chance of making a new kind of move.&nbsp;In the second, the agent is also rewarded for visiting new places, but in a more nuanced way: One neural network learns to&nbsp;predict the future state while a second recalls the past, and then tries to predict the present by predicting the past from the future. If this prediction is erroneous it rewards itself, as it is a sign that it discovered something it didn’t know before.&nbsp;The second algorithm was so counterintuitive it took the researchers time to figure out.&nbsp;</p> <p>“Our biases often&nbsp;prevent us from trying very novel ideas,” says Alet. “But computers don’t care. 
They try, and see what works, and sometimes we get great unexpected results.”</p><p>More researchers are turning to machine learning to design better machine learning algorithms, a field known as AutoML. At Google, Le and his colleagues recently unveiled a new algorithm-discovery tool called AutoML-Zero.&nbsp;(Its name is a play on Google’s AutoML software for customizing deep net architectures for a given application, and Google DeepMind’s AlphaZero, the program that can learn to play different board games by playing millions of games against itself.)&nbsp;</p><p>Their method searches through a space of algorithms made up of simpler primitive operations. But rather than look for an exploration strategy, their goal is to discover algorithms for classifying images. Both studies show the potential for humans to use machine-learning methods themselves to create novel, high-performing machine-learning algorithms.</p><p>“The algorithms we generated could be read and interpreted by humans, but to actually understand the code we had to reason through each variable and operation and how they evolve with time,” says study co-author Martin Schneider, a graduate student at MIT.&nbsp;“It’s an interesting open challenge to design algorithms and workflows that leverage the computer’s ability to evaluate lots of algorithms and our human ability to explain and improve on those ideas.”&nbsp;</p> <p>The research received support from the U.S. National Science Foundation,&nbsp;Air Force Office of Scientific Research,&nbsp;Office of Naval Research, Honda Research Institute, SUTD Temasek Laboratories, and MIT Quest for Intelligence.</p>
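The first kind of exploration function described above, rewarding the agent for reaching new places, is often written as a count-based novelty bonus. The sketch below is a generic stand-in for that idea, not one of the 52,000 generated algorithms; the 1/sqrt(count) form and the `beta` weighting are conventional assumptions borrowed from the exploration literature.

```python
from collections import defaultdict

class NoveltyBonus:
    """Toy count-based curiosity: the intrinsic reward shrinks as a state
    is revisited, pushing the agent toward places it hasn't seen."""
    def __init__(self):
        self.visits = defaultdict(int)

    def reward(self, state):
        self.visits[state] += 1
        return 1.0 / self.visits[state] ** 0.5

# An agent would add this bonus to the environment's own reward:
#   total_reward = extrinsic_reward + beta * bonus.reward(observation)
```

The second discovered function is richer, pairing a forward-prediction network with a memory of the past, but it rests on the same principle: prediction error on something unfamiliar is itself the reward signal.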
MIT researchers used machine learning to find entirely new algorithms for encoding exploration. Their machine-designed algorithms outperformed human-designed algorithms on the wide range of simulated tasks and environments shown here. Image courtesy of Martin Schneider.

MIT conference reveals the power of using artificial intelligence to discover new drugs Inaugural AI Powered Drug Discovery and Manufacturing Conference drew pharmaceutical companies, government regulators, and pioneering drug researchers. Mon, 27 Apr 2020 13:50:01 -0400 Ashley Belanger | MIT Schwarzman College of Computing <p>Developing drugs to combat Covid-19 is a global priority, requiring communities to come together to fight the spread of infection. At MIT, researchers with backgrounds in machine learning and life sciences are collaborating, sharing datasets and tools to develop machine learning methods that can identify novel cures for Covid-19.</p> <p>This research is an extension of a community effort launched earlier this year. In February, before the Institute de-densified as a result of the pandemic, the first-ever <a href=””>AI Powered Drug Discovery and Manufacturing Conference</a>, conceived and hosted by the <a href=””>Abdul Latif Jameel Clinic for Machine Learning in Health</a>, drew attendees including pharmaceutical industry researchers, government regulators, venture capitalists, and pioneering drug researchers. More than 180 health care companies and 29 universities developing new artificial intelligence methods used in pharmaceuticals got involved, making the conference a singular event designed to lift the mask and reveal what goes on in the process of drug discovery.</p> <p>As secretive as Silicon Valley seems, computer science and engineering students typically know what a job looks like when aspiring to join companies like Facebook or Tesla. 
But the global head of research and development for Janssen — the innovative pharmaceutical company owned by Johnson &amp; Johnson — said it’s often much harder for students to grasp how their work fits into drug discovery.</p> <p>“That’s a problem at the moment,” Mathai Mammen says, after addressing attendees, including MIT graduate students and postdocs, who gathered in the Samberg Conference Center in part to get a glimpse behind the scenes of companies currently working on bold ideas blending artificial intelligence with health care. Mammen, who is a graduate of the Harvard-MIT Program in Health Sciences and Technology and whose work at Theravance has brought five new medicines to market, with many more on the way, is here to be part of the answer to that problem. “What the industry needs to do is talk to students and postdocs about the sorts of interesting scientific and medical problems whose solutions can directly and profoundly benefit the health of people everywhere,” he says.</p> <p>“The conference brought together research communities that rarely overlap at technical conferences,” says Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science, Jameel Clinic faculty co-lead, and one of the conference organizers. “This blend enables us to better understand open problems and opportunities in the intersection. The exciting piece for MIT students, especially for computer science and engineering students, is to see where the industry is moving and to understand how they can contribute to this changing industry, which will happen when they graduate.”</p> <p>Over two days, conference attendees snapped photographs through a packed schedule of research presentations, technical sessions, and expert panels, covering everything from discovering new therapeutic molecules with machine learning to funding AI research. 
Carefully curated, the conference provided a roadmap of bold tech ideas at work in health care now and traced the path to show how those tech solutions get implemented.</p> <p>At the conference, Barzilay and Jim Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering, and Jameel Clinic faculty co-lead, presented research from a study published in <em>Cell</em> where they <a href=”″>used machine learning to help identify a new drug</a> that can target antibiotic-resistant bacteria. Together with MIT researchers Tommi Jaakkola, Kevin Yang, Kyle Swanson, and the first author Jonathan Stokes, they demonstrated how blending their backgrounds can yield potential answers to combat the growing antibiotic resistance crisis.</p> <p>Collins saw the conference as an opportunity to inspire interest in antibiotic research, hoping to get the top young minds involved in battling resistance to antibiotics built up over decades of overuse and misuse, an urgent predicament in medicine that computer science students might not realize they can help solve. “I think we should take advantage of the innovation ecosystem at MIT and the fact that there are many experts here at MIT who are willing to step outside their comfort zone and get engaged in a new problem,” Collins says. “Certainly in this case, the development and discovery of novel antibiotics, is critically needed around the globe.”</p> <p>AIDM showed the power of collaboration, inviting experts from major health-care companies and relevant organizations like Merck, Bayer, DARPA, Google, Pfizer, Novartis, Amgen, the U.S. Food and Drug Administration, and Janssen. With attendance at capacity, it also showed people are ready to pull together to get on the same page. “I think the time is right and I think the place is right,” Collins says. 
“I think MIT is well-positioned to be a national, if not an international leader in this space, given the excitement and engagement of our students and our position in Kendall Square.”</p> <p>A biotech hub for decades, Kendall Square has come a long way since big data came to Cambridge, Massachusetts, forever changing life science companies based here. AIDM kicked off with Institute Professor and Professor of Biology Phillip Sharp walking attendees through a brief history of AI in health care in the area. He was perhaps the person at the conference most excited for others to see the potential, as through his long career, he’s watched firsthand the history of innovation that led to this conference.</p> <p>“The bigger picture, which this conference is a major part of, is this bringing together of the life science — biologists and chemists with machine learning and artificial intelligence — it’s the future of life science,” Sharp says. “It’s clear. It will reshape how we talk about our science, how we think about solving problems, how we deal with the other parts of the process of taking insights to benefit society.”</p>
Mathai Mammen, head of R&D at the Janssen Pharmaceutical Companies of Johnson & Johnson, was the keynote speaker at the Jameel Clinic’s AI Powered Drug Discovery and Manufacturing Conference in February. Photo: Rachel Wu

Muscle signals can pilot a robot CSAIL’s Conduct-A-Bot system uses muscle signals to cue a drone’s movement, enabling more natural human-robot communication. Mon, 27 Apr 2020 13:30:01 -0400 Rachel Gordon | CSAIL <p>Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most important keys to understanding intention and communication.&nbsp;</p> <p>But intuitiveness is hard to teach — especially to a machine. Looking to improve this, a team from MIT’s <a href=””>Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) came up with a method that dials us closer to more seamless human-robot collaboration. The system, called “Conduct-A-Bot,” uses human muscle signals from wearable sensors to pilot a robot’s movement.&nbsp;</p> <p>“We envision a world in which machines help people with cognitive and physical work, and to do so, they adapt to people rather than the other way around,” says Professor Daniela Rus, director of CSAIL, deputy dean of research for the MIT Stephen A. Schwarzman College of Computing, and co-author on a paper about the system.&nbsp;</p> <p>To enable seamless teamwork between people and machines, electromyography and motion sensors are worn on the biceps, triceps, and forearms to measure muscle signals and movement. Algorithms then process the signals to detect gestures in real time, without any offline calibration or per-user training data. 
The system uses just two or three wearable sensors, and nothing in the environment — largely reducing the barrier to casual users interacting with robots.</p> <p>While Conduct-A-Bot could potentially be used for various scenarios, including navigating menus on electronic devices or supervising autonomous robots, for this research the team used a Parrot Bebop 2 drone, although any commercial drone could be used.</p> <p>By detecting actions like rotational gestures, clenched fists, tensed arms, and activated forearms, Conduct-A-Bot can move the drone left, right, up, down, and forward, as well as allow it to rotate and stop.&nbsp;</p> <p>If you gestured toward the right to your friend, they could likely interpret that they should move in that direction. Similarly, if you waved your hand to the left, for example, the drone would follow suit and make a left turn.&nbsp;</p> <p>In tests, the drone correctly responded to 82 percent of over 1,500 human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94 percent of cued gestures when the drone was not being controlled.</p> <p>“Understanding our gestures could help robots interpret more of the nonverbal cues that we naturally use in everyday life,” says Joseph DelPreto, lead author on the new paper. “This type of system could help make interacting with a robot more similar to interacting with another person, and make it easier for someone to start using robots without prior experience or external sensors.”&nbsp;</p> <p>This type of system could eventually target a range of applications for human-robot collaboration, including remote exploration, assistive personal robots, or manufacturing tasks like delivering objects or lifting materials.&nbsp;</p> <p>These intelligent tools are also consistent with social distancing — and could potentially open up a realm of future contactless work. 
For example, you can imagine machines being controlled by humans to safely clean a hospital room, or drop off medications, while letting us humans keep a safe distance.</p><p>Muscle signals can often provide information about states that are hard to observe from vision, such as joint stiffness or fatigue.</p> <p>For example, if you watch a video of someone holding a large box, you might have difficulty guessing how much effort or force was needed — and a machine would also have difficulty gauging that from vision alone. Using muscle sensors opens up possibilities to estimate not only motion, but also the force and torque required to execute that physical trajectory.</p> <p>For the gesture vocabulary currently used to control the robot, the movements were detected as follows:&nbsp;</p> <ul> <li> <p>stiffening the upper arm to stop the robot (similar to briefly cringing when seeing something going wrong): biceps and triceps muscle signals;</p> </li> <li> <p>waving the hand left/right and up/down to move the robot sideways or vertically: forearm muscle signals (with the forearm accelerometer indicating hand orientation);</p> </li> <li> <p>fist clenching to move the robot forward: forearm muscle signals; and</p> </li> <li> <p>rotating clockwise/counterclockwise to turn the robot: forearm gyroscope.</p> </li>
</ul> <p>Machine learning classifiers detected the gestures using the wearable sensors. Unsupervised classifiers processed the muscle and motion data and clustered it in real time to learn how to separate gestures from other motions. A neural network also predicted wrist flexion or extension from forearm muscle signals.&nbsp;&nbsp;</p> <p>The system essentially calibrates itself to each person’s signals while they’re making gestures that control the robot, making it faster and easier for casual users to start interacting with robots.</p> <p>In the future, the team hopes to expand the tests to include more subjects. And while the movements for Conduct-A-Bot cover common gestures for robot motion, the researchers want to extend the vocabulary to include more continuous or user-defined gestures. Eventually, the hope is to have the robots learn from these interactions to better understand the tasks and provide more predictive assistance or increase their autonomy.&nbsp;</p> <p>“This system moves one step closer to letting us work seamlessly with robots so they can become more effective and intelligent tools for everyday tasks,” says DelPreto. “As such collaborations continue to become more accessible and pervasive, the possibilities for synergistic benefit continue to deepen.”&nbsp;</p> <p>DelPreto and Rus presented the paper virtually earlier this month at the ACM/IEEE International Conference on Human Robot Interaction.</p>
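As a rough illustration of how muscle activation can be separated from rest without per-user training, the toy detector below flags windows of an EMG signal whose RMS amplitude rises well above a baseline estimated from the first window. The window length and threshold multiplier are arbitrary assumptions, and the real Conduct-A-Bot pipeline uses unsupervised clustering plus a neural network rather than a fixed threshold.

```python
import numpy as np

def detect_activation(emg, window=50, k=3.0):
    """Flag windows whose RMS amplitude exceeds k times the resting
    baseline (the first window is assumed to be rest)."""
    baseline = np.sqrt(np.mean(emg[:window] ** 2))
    n = emg.size // window
    rms = np.sqrt(np.mean(emg[: n * window].reshape(n, window) ** 2, axis=1))
    return rms > k * baseline
```

A burst of muscle activity then shows up as a run of flagged windows, which a downstream classifier could map to a gesture such as a clenched fist or a tensed arm.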
Lead author Joseph DelPreto controls a “Conduct-A-Bot” drone with his arm muscles. Photo courtesy of the researchers.

Shedding light on complex power systems Senior Research Scientist Marija Ilic is making electric energy systems future-ready. Thu, 23 Apr 2020 14:35:01 -0400 Grace Chua | Laboratory for Information and Decision Systems <p>Marija Ilic — a senior research scientist at the Laboratory for Information and Decision Systems, affiliate of the MIT Institute for Data, Systems, and Society, senior staff in MIT Lincoln Laboratory’s Energy Systems Group, and Carnegie Mellon University professor emerita — is a researcher on a mission: making electric energy systems future-ready.</p><p>Since the earliest days of streetcars and public utilities, electric power systems have had a fairly standard structure: for a given area, a few large generation plants produce and distribute electricity to customers. It is a one-directional structure, with the energy plants being the only source of power for many end users.</p><p>Today, however, electricity can be generated from many and varied sources — and move through the system in multiple directions. An electric power system may include stands of huge turbines capturing wild ocean winds, for instance. There might be solar farms of a hundred megawatts or more, or houses with solar panels on their roofs that some days make more electricity than occupants need, some days much less. And there are electric cars, their batteries hoarding stored energy overnight. Users may draw electricity from one source or another, or feed it back into the system, all at the same time. Add to that the trend toward open electricity markets, where end users like households can pick and choose the electricity services they buy depending on their needs. How should systems operators integrate all these while keeping the grid stable and ensuring power gets to where it is needed?</p><p>To explore this question, Ilic has developed a new way to model complex power systems.</p> <p>Electric power systems, even traditional ones, are complex and heterogeneous to begin with. 
They cover wide geographical areas and have legal and political barriers to contend with, such as state borders and energy policies. In addition, all electric power systems have inherent physical limitations. For instance, power does not flow in a set path in an electric grid, but rather along all possible paths connecting supply to demand. To maintain grid stability and quality of service, then, the system must control for the impact of interconnections: a change in supply and demand at one point in a system changes supply and demand for the other points in the system. This means there is much more complexity to manage as new sources of energy (more interconnections) with sometimes unpredictable supply (such as wind or solar power) come into play. Ultimately, however, to maintain stability and quality of service, and to balance supply and demand within the system, it comes down to a relatively simple concept: the power consumed and the rate at which it is consumed (plus whatever is lost along the way) must always equal the power produced and the rate at which it is produced.</p> <p>Using this simpler concept to manage the complexities and limitations of electric power systems, Ilic is taking a non-traditional approach: She models the systems using information about energy, power, and ramp rate (the rate at which power can increase over time) for each part of the system — distributing decision-making calculations into smaller operational chunks. Doing this <a href=”″>streamlines the model</a> but retains information about the system’s physical and temporal structure. “That’s the minimal information you need to exchange.
It’s simple and technology-agnostic, but we don’t teach systems that way.”</p> <p>She believes regulatory organizations such as the Federal Energy Regulatory Commission and the North American Electric Reliability Corporation should have standard protocols for such information exchanges, just as internet protocols govern how data is exchanged on the internet. “If you were to [use a standard set of] specifications like: what is your capacity, how much does it vary over time, how much energy do you need and within what power range — the system operator could integrate different sources in a much simpler way than we are doing now.”</p> <p>Another important aspect of Ilic’s work is that her models lend themselves to controlling the system with a layer of sensor and communications technologies. This uses a framework she developed called the Dynamic Monitoring and Decision Systems (DyMonDS) framework. The data-enabled decision-making concept was tested using real data from Portugal’s Azores Islands and has since been applied to real-world challenges. Fittingly, her new modeling approach supports DyMonDS design, including systematic use of many theoretical concepts used by the LIDS community in their research.</p> <p>One such challenge included work on Puerto Rico’s power grid. Ilic was the technical lead on a Lincoln Laboratory project on designing future architectures and software to make Puerto Rico’s electric power grid more resilient without adding much more production capacity or cost. Typically, a power grid’s generation capacity is scheduled in a simple, brute-force way based on weather forecasts and the hottest and coldest days of the year, one that doesn’t respond sensitively to real-time needs.
Making such a system more resilient would mean spending a lot more on generation, transmission, and distribution capacity, whereas a more dynamic system that integrates distributed microgrids could tame the cost, Ilic says: “What we are trying to do is to have systematic frameworks for embedding intelligence into small microgrids serving communities, and having them interact with large-scale power grids. People are realizing that you can make many small microgrids to serve communities rather than relying only on large-scale electrical power generation.”</p> <p>Although this is one of Ilic’s most recent projects, her work on DyMonDS can be traced back four decades, to when she was a student at the University of Belgrade in the former Yugoslavia, which sent her to the United States to learn how to use computers to prevent blackouts.</p> <p>She ended up at Washington University in St. Louis, Missouri, studying with applied mathematician John Zaborszky, a legend in the field who was originally chief engineer of Budapest’s municipal power system before moving to the United States. (“The legend goes that in the morning he would teach courses, and in the afternoon he would go and operate the Hungarian power system protection by hand.”) Under Zaborszky, a systems and control expert, Ilic learned to think in abstract terms as well as in terms of physical power systems and technologies. She became fascinated by the question of how to model, simulate, monitor, and control power systems — and that’s where she’s been ever since. (Although, she admits as she uncoils to her full height from behind her desk, her first love was actually playing basketball.)</p> <p>Ilic first arrived at MIT in 1987 to work with the late professor Fred Schweppe on connecting electricity technologies with electricity markets.
She stayed on as a senior research scientist until 2002, when she moved to Carnegie Mellon University (CMU) to lead the multidisciplinary Electric Energy Systems Group there. In 2018, after her consulting work for Lincoln Lab ramped up, she retired from CMU to move back to the familiar environs of Cambridge, Massachusetts. CMU’s loss has been MIT’s gain: In fall 2019, Ilic taught a course in modeling, simulation, and control of electric energy systems, applying her work on streamlined models that use pared-down information.</p> <p>Addressing the evolving needs of electric power systems has not been a “hot” topic, historically. Traditional power systems are often seen by the academic community as legacy technology with no fundamentally new developments. And yet when new software and systems are developed to help integrate distributed energy generation and storage, commercial systems operators regard them as untested and disruptive. “I’ve always been a bit on the sidelines from mainstream power and electrical engineering because I’m interested in some of these things,” she remarks.</p> <p>However, Ilic’s work is becoming increasingly urgent. Much of today’s power system is physically very old and will need to be retired and replaced over the next decade. This presents an opportunity for innovation: the next generation of electric energy systems could be built to integrate renewable and distributed energy resources at scale — addressing the pressing challenge of climate change and making way for further progress.</p> <p>“That’s why I’m still working, even though I should be retired.” She smiles. “It supports the evolution of the system to something better.”</p>
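The balance concept Ilic uses (production must track consumption plus losses, subject to each source's ramp limits) can be sketched as a toy dispatch check. This is purely illustrative; the source names, capacities, and ramp rates below are invented, not taken from her models.

```python
# Toy illustration of the balance concept: at every interval, power
# produced must equal power consumed plus losses, and each source can
# only change its output within its ramp-rate limit.
# All names and numbers are invented for illustration.

def feasible_dispatch(sources, demand, losses, dt=1.0):
    """Check whether the sources can meet demand + losses at every step,
    given each source's capacity and ramp rate (max change per interval)."""
    outputs = {name: s["power"] for name, s in sources.items()}
    schedule = []
    for need in (d + losses for d in demand):
        gap = need - sum(outputs.values())
        for name, s in sources.items():
            # Move this source toward closing the gap, within ramp limits.
            step = max(-s["ramp"] * dt, min(s["ramp"] * dt, gap))
            new = min(s["capacity"], max(0.0, outputs[name] + step))
            gap -= new - outputs[name]
            outputs[name] = new
        if abs(sum(outputs.values()) - need) > 1e-6:
            return None  # ramp or capacity limits prevent balancing
        schedule.append(dict(outputs))
    return schedule

sources = {
    "gas_turbine": {"power": 50.0, "capacity": 100.0, "ramp": 20.0},
    "solar_farm":  {"power": 30.0, "capacity": 40.0,  "ramp": 40.0},
}
plan = feasible_dispatch(sources, demand=[78.0, 95.0, 110.0], losses=2.0)
print("feasible" if plan else "infeasible")  # prints "feasible"
```

Exchanging only capacity, ramp, and energy needs per subsystem, as in the dictionary above, is the kind of minimal information exchange the article describes.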
Marija Ilic — a senior research scientist at the Laboratory for Information and Decision Systems, affiliate of the MIT Institute for Data, Systems, and Society, senior staff in MIT Lincoln Laboratory’s Energy Systems Group, and Carnegie Mellon University professor emerita — is a researcher on a mission: making electric energy systems future-ready. Photo: MIT LIDS

Reducing the carbon footprint of artificial intelligence MIT system cuts the energy required for training and running neural networks. Thu, 23 Apr 2020 13:14:52 -0400 Rob Matheson | MIT News Office <p>Artificial intelligence has become a focus of certain ethical concerns, but it also has some major sustainability issues.&nbsp;</p> <p>Last June, researchers at the University of Massachusetts at Amherst released a startling <a href=”” target=”_blank”>report</a> estimating that the power required to train and search a certain neural network architecture produces roughly 626,000 pounds of carbon dioxide emissions. That’s equivalent to nearly <a href=”” target=”_blank”>five times</a> the lifetime emissions of the average U.S. car, including its manufacturing.</p> <p>This issue gets even more severe in the model deployment phase, where deep neural networks need to be deployed on diverse hardware platforms, each with different properties and computational resources.&nbsp;</p> <p>MIT researchers have developed a new automated AI system for training and running certain neural networks. Results indicate that, by improving the computational efficiency of the system in some key ways, the system can cut down the pounds of carbon emissions involved — in some cases, down to low triple digits.&nbsp;</p> <p>The researchers’ system, which they call a <a href=”” target=”_blank”>once-for-all network</a>, trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. This dramatically reduces the energy usually required to train each specialized neural network for new platforms — which can include billions of internet of things (IoT) devices.
Using the system to train a computer-vision model, they estimated that the process produced roughly 1/1,300 of the carbon emissions of today’s state-of-the-art neural architecture search approaches, while reducing inference time by 1.5 to 2.6 times.&nbsp;</p> <p>“The aim is smaller, greener neural networks,” says Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science. “Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods.”</p> <p>The work was carried out on Satori, an efficient computing cluster donated to MIT by IBM that is capable of performing 2 quadrillion calculations per second. The paper is being presented next week at the International Conference on Learning Representations. Joining Han on the paper are four undergraduate and graduate students from EECS, MIT-IBM Watson AI Lab, and Shanghai Jiao Tong University.&nbsp;</p> <p><strong>Creating a “once-for-all” network</strong></p> <p>The researchers built the system on a recent AI advance called AutoML (for automatic machine learning), which eliminates manual network design. Neural networks automatically search massive design spaces for network architectures tailored, for instance, to specific hardware platforms. But there’s still a training efficiency issue: Each model has to be selected and then trained from scratch for its platform architecture.&nbsp;</p> <p>“How do we train all those networks efficiently for such a broad spectrum of devices — from a $10 IoT device to a $600 smartphone?
Given the diversity of IoT devices, the computation cost of neural architecture search will explode,” Han says.</p> <p>The researchers invented an AutoML system that trains only a single, large “once-for-all” (OFA) network that serves as a “mother” network, nesting an extremely high number of subnetworks that are sparsely activated from the mother network. OFA shares all its learned weights with all subnetworks — meaning they come essentially pretrained. Thus, each subnetwork can operate independently at inference time without retraining.&nbsp;</p> <p>The team trained an OFA convolutional neural network (CNN) — commonly used for image-processing tasks — with versatile architectural configurations, including different numbers of layers and “neurons,” diverse filter sizes, and diverse input image resolutions. Given a specific platform, the system uses the OFA as the search space to find the best subnetwork based on the accuracy and latency tradeoffs that correlate to the platform’s power and speed limits. For an IoT device, for instance, the system will find a smaller subnetwork. For smartphones, it will select larger subnetworks, but with different structures depending on individual battery lifetimes and computation resources. OFA decouples model training and architecture search, and spreads the one-time training cost across many inference hardware platforms and resource constraints.&nbsp;</p> <p>This relies on a “progressive shrinking” algorithm that efficiently trains the OFA network to support all of the subnetworks simultaneously. It starts with training the full network with the maximum size, then progressively shrinks the sizes of the network to include smaller subnetworks. Smaller subnetworks are trained with the help of the larger ones, so that they grow together. In the end, all of the subnetworks with different sizes are supported, allowing fast specialization based on the platform’s power and speed limits.
It supports many hardware devices with zero training cost when adding a new device.<br />
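The progressive-shrinking idea can be sketched schematically: training begins with only the largest configuration, and the pool of allowed depths and widths grows stage by stage, so smaller subnetworks are trained alongside the larger ones. This is a hypothetical sketch, not the authors' code; the depth and width options are invented.

```python
import random

# Schematic sketch of "progressive shrinking" (invented configuration
# options, not the authors' code): stage k samples subnetworks from the
# k+1 largest depth and width options, so training starts with the full
# network and gradually includes smaller subnetworks.

DEPTHS = [4, 3, 2]         # number of layers, largest first
WIDTHS = [1.0, 0.75, 0.5]  # fraction of channels kept, largest first

def progressive_shrinking_schedule(stages=3, steps_per_stage=4, seed=0):
    """Return (depth, width) subnetwork configs for each training step."""
    rng = random.Random(seed)
    schedule = []
    for stage in range(stages):
        depth_pool = DEPTHS[: stage + 1]
        width_pool = WIDTHS[: stage + 1]
        for _ in range(steps_per_stage):
            schedule.append((rng.choice(depth_pool), rng.choice(width_pool)))
    return schedule

for depth, width in progressive_shrinking_schedule():
    # A real system would sample this subnetwork from the "mother"
    # network, run a training step, and distill from the larger ones.
    print(f"train subnetwork: depth={depth}, width={width:.2f}")
```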
&nbsp;<br />
In total, one OFA, the researchers found, can comprise more than 10 quintillion — that’s a 1 followed by 19 zeroes — architectural settings, likely covering every platform that could be needed. But training the OFA and searching it ends up being far more efficient than spending hours training each neural network per platform. Moreover, OFA does not compromise accuracy or inference efficiency. Instead, it provides state-of-the-art ImageNet accuracy on mobile devices. And, compared with industry-leading CNN models, the researchers say OFA provides a 1.5-2.6 times speedup, with superior accuracy.&nbsp;<br />
&nbsp; &nbsp;&nbsp;<br />
“That’s a breakthrough technology,” Han says. “If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size.”</p> <p>“The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices,” says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.</p> <p>“If rapid progress in AI is to continue, we need to reduce its environmental impact,” says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. “The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better.”</p>
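The specialization step described above, searching the pretrained subnetworks for the best accuracy-latency tradeoff on a given platform, can be illustrated with a minimal sketch. The subnetwork names and the accuracy and latency numbers are invented for illustration.

```python
# Sketch of the specialization step: treat the pretrained subnetworks as
# a search space and pick the most accurate one that fits the platform's
# latency budget. Names, accuracies, and latencies are all invented.

subnetworks = [
    # (name, predicted accuracy, measured latency in ms on the device)
    ("depth2_width0.50", 0.71, 12.0),
    ("depth3_width0.75", 0.75, 28.0),
    ("depth4_width0.75", 0.77, 41.0),
    ("depth4_width1.00", 0.79, 63.0),
]

def select_subnetwork(candidates, latency_budget_ms):
    """Return the highest-accuracy subnetwork within the latency budget,
    or None if nothing fits. No retraining: weights are already shared."""
    feasible = [c for c in candidates if c[2] <= latency_budget_ms]
    return max(feasible, key=lambda c: c[1]) if feasible else None

print(select_subnetwork(subnetworks, 30.0))   # IoT-class budget
print(select_subnetwork(subnetworks, 100.0))  # smartphone-class budget
```

Because every candidate inherits its weights from the mother network, the search costs only a table lookup per platform rather than a new training run.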
MIT researchers have developed a new automated AI system with improved computational efficiency and a much smaller carbon footprint. The researchers’ system trains one large neural network comprising many pretrained subnetworks of different sizes that can be tailored to diverse hardware platforms without retraining. Image: MIT News, based on figures courtesy of the researchers

With lidar and artificial intelligence, road status clears up after a disaster Automated tools can help emergency managers make decisions, plan routes, and quantify road damage at city scales. Tue, 21 Apr 2020 15:20:01 -0400 Kylie Foy | Lincoln Laboratory <p>Consider the days after a hurricane strikes. Trees and debris are blocking roads, bridges are destroyed, and sections of roadway are washed out. Emergency managers soon face a bevy of questions: How can supplies get delivered to certain areas? What’s the best route for evacuating survivors? Which roads are too damaged to remain open?</p> <p>Without concrete data on the state of the road network, emergency managers often have to base their answers on incomplete information. The Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory hopes to use its airborne lidar platform, paired with artificial intelligence (AI) algorithms, to fill this information gap.</p> <p>“For a truly large-scale catastrophe, understanding the state of the transportation system as early as possible is critical,” says Chad Council, a researcher in the group. “With our particular approach, you can determine road viability, do optimal routing, and also get quantified road damage. You fly it, you run it, you’ve got everything.”</p> <p>Since the 2017 hurricane season, the team has been flying its advanced lidar platform over stricken cities and towns. Lidar works by pulsing photons down over an area and measuring the time it takes for each photon to bounce back to the sensor. These time-of-arrival data points paint a 3D “point cloud” map of the landscape — every road, tree, and building — to within about a foot of accuracy.</p> <p>To date, they’ve mapped huge swaths of the Carolinas, Florida, Texas, and all of Puerto Rico.
In the immediate aftermath of hurricanes in those areas, the team manually sifted through the data to help the Federal Emergency Management Agency (FEMA) find and quantify damage to roads, among other tasks. The team’s focus now is on developing AI algorithms that can automate these processes and find ways to route around damage.</p> <p><strong>What’s the road status?</strong></p> <p>Information about the road network after a disaster comes to emergency managers in a “mosaic of different information streams,” Council says, namely satellite images, aerial photographs taken by the Civil Air Patrol, and crowdsourcing from vetted sources.</p> <p>“These various efforts for acquiring data are important because every situation is different. There might be cases when crowdsourcing is fastest, and it’s good to have redundancy. But when you consider the scale of disasters like Hurricane Maria on Puerto Rico, these various streams can be overwhelming, incomplete, and difficult to coalesce,” he says.</p> <p>During these times, lidar can act as an all-seeing eye, providing a big-picture map of an area and also granular details on road features. The laboratory’s platform is especially advanced because it uses Geiger-mode lidar, which is sensitive to a single photon. As such, its sensor can collect each of the millions of photons that trickle through openings in foliage as the system is flown overhead. This foliage can then be filtered out of the lidar map, revealing roads that would otherwise be hidden from aerial view.</p> <p>To provide the status of the road network, the lidar map is first run through a neural network. This neural network is trained to find and extract the roads, and to determine their widths. Then, AI algorithms search these roads and flag anomalies that indicate the roads are impassable. For example, a cluster of lidar points extending up and across a road is likely a downed tree.
A sudden drop in elevation is likely a hole or washed-out area in a road.</p> <p>The extracted road network, with its flagged anomalies, is then merged with an OpenStreetMap of the area (an open-access map similar to Google Maps). Emergency managers can use this system to plan routes, or in other cases to identify isolated communities — those that are cut off from the road network. The system will show them the most efficient route between two specified locations, finding detours around impassable roads. Users can also specify how important it is to stay on the road; on the basis of that input, the system provides routes through parking lots or fields.</p> <p>This process, from extracting roads to finding damage to planning routes, can be applied to the data at the scale of a single neighborhood or across an entire city.</p> <p><strong>How fast and how accurate?</strong></p> <p>To gain an idea of how fast this system works, consider that in a recent test, the team flew the lidar platform, processed the data, and got AI-based analytics in 36 hours. That sortie covered 250 square miles, an area about the size of Chicago, Illinois. But accuracy is just as important as speed.
“As we incorporate AI techniques into decision support, we’re developing metrics to characterize an algorithm’s performance,” Council says.</p> <p>For finding roads, the algorithm determines if a point in the lidar point cloud is “road” or “not road.” The team ran a performance evaluation of the algorithm against 50,000 square meters of suburban data, and the resulting ROC curve indicated that the current algorithm provided an 87 percent true positive rate (that is, correctly labeled a point as “road”), with a 20 percent false positive rate (that is, labeling a point as “road” that may not be road). The false positives are typically areas that geometrically look like a road but aren’t.</p> <p>“Because we have another data source for identifying the general location of roads, OpenStreetMaps, these false positives can be excluded, resulting in a highly accurate 3D point cloud representation of the road network,” says Dieter Schuldt, who has been leading the algorithm-testing efforts.</p> <p>For the algorithm that detects road damage, the team is in the process of further aggregating ground truth data to evaluate its performance. In the meantime, preliminary results have been promising. Their damage-finding algorithm recently flagged for review a potentially blocked road in Bedford, Massachusetts, which appeared to be a hole measuring 10 meters wide by 7 meters long by 1 meter deep. The town’s public works department and a site visit confirmed that construction blocked the road.</p> <p>“We actually didn’t go in expecting that this particular sortie would capture examples of blocked roads, and it was an interesting find,” says Bhavani Ananthabhotla, a contributor to this work.
“With additional ground truth annotations, we hope to not only evaluate and improve performance, but also to better tailor future models to regional emergency management needs, including informing route planning and repair cost estimation.”</p> <p>The team is continuing to test, train, and tweak their algorithms to improve accuracy. Their hope is that these techniques may soon be deployed to help answer important questions during disaster recovery.</p> <p>“We picture lidar as a 3D scaffold that other data can be draped over and that can be trusted,” Council says. “The more trust, the more likely an emergency manager, and a community in general, will use it to make the best decisions they can.”</p>
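The true and false positive rates the team quotes are standard confusion-matrix quantities. A minimal sketch of how they are computed from per-point labels (the example labels are invented, not the laboratory's evaluation data):

```python
# Minimal sketch of the road-labeling metrics: true positive rate and
# false positive rate over per-point "road"/"not road" labels.
# The example labels are invented, not the laboratory's evaluation data.

def tpr_fpr(truth, predicted):
    """truth/predicted: parallel lists of booleans (True = 'road')."""
    tp = sum(t and p for t, p in zip(truth, predicted))
    fn = sum(t and not p for t, p in zip(truth, predicted))
    fp = sum(not t and p for t, p in zip(truth, predicted))
    tn = sum(not t and not p for t, p in zip(truth, predicted))
    return tp / (tp + fn), fp / (fp + tn)

truth     = [True, True, True, True, False, False, False, False, False, False]
predicted = [True, True, True, False, True, False, False, False, False, False]
tpr, fpr = tpr_fpr(truth, predicted)
# 3 of 4 road points found; 1 of 6 non-road points mislabeled as road
print(f"TPR={tpr:.0%}, FPR={fpr:.1%}")
```

Sweeping the classifier's decision threshold and plotting these two rates against each other yields the ROC curve mentioned above.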
In this lidar map, AI algorithms have identified roads (red). By pairing this map with OpenStreetMaps, users can plan routes around roads that are deemed impassable and can do so at the scale of an entire sortie, which here covers about 400 square miles. Image courtesy of the researchers.

Professor Daniela Rus named to White House science council CSAIL director and MIT Schwarzman College of Computing deputy dean of research will serve on the President’s Council of Advisors on Science and Technology. Tue, 21 Apr 2020 15:15:01 -0400 Adam Conner-Simons | MIT CSAIL <p>This week the White House announced that MIT Professor Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), has been selected to serve on the President’s Council of Advisors on Science and Technology (PCAST).</p> <p>The council provides advice to the White House on topics critical to U.S. security and the economy, including policy recommendations on the future of work, American leadership in science and technology, and the support of U.S. research and development.&nbsp;</p> <p>PCAST operates under the aegis of the White House Office of Science and Technology Policy (OSTP), which was established in law in 1976. However, the council has existed more informally going back to Franklin Roosevelt’s Science Advisory Board in 1933.</p> <p>“I’m grateful to be able to add my perspective as a computer scientist to this group at a time when so many issues involving AI and other aspects of computing raise important scientific and policy questions for the nation and the world,” says Rus.<br />
&nbsp;<br />
Rus is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing. Her research in robotics, artificial intelligence, and data science focuses primarily on developing the science and engineering of autonomy, with the long-term objective of enabling a future where machines are integrated into daily life to support both cognitive and physical tasks. The applications of her work are broad and include transportation, manufacturing, medicine, and urban planning.&nbsp;<br />
&nbsp;<br />
More than a dozen MIT faculty and alumni have served on PCAST during past presidential administrations. These include former MIT president Charles Vest; Institute Professors Phillip Sharp and John Deutch; Ernest Moniz, professor of physics and former U.S. Secretary of Energy; and Eric Lander, director of the Broad Institute of MIT and Harvard and professor of biology, who co-chaired PCAST during the Obama administration. Previous councils have offered advice on topics ranging from <a href=”” target=”_blank”>data privacy</a> and <a href=”” target=”_blank”>nanotechnology</a> to <a href=”” target=”_blank”>job training</a> and <a href=”” target=”_blank”>STEM education</a>.</p>
Daniela Rus is the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), deputy dean of research for the MIT Schwarzman College of Computing, and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. Photo: Jason Dorfman/MIT CSAIL

Deploying more conversational chatbots Startup Posh has created chatbots that use “conversational memory” to have more natural exchanges. Fri, 17 Apr 2020 00:00:00 -0400 Zach Winn | MIT News Office <p>The comedian Bill Burr has said he refuses to call into automated customer service lines for fear that, years later on his death bed, all he’ll be able to think about are the moments he wasted dealing with chatbots.</p><p>Indeed, the frustrating experience of trying to complete even the most straightforward task through an automated customer service line is enough to make anyone question the purpose of life.</p><p>Now the startup Posh is trying to make conversations with chatbots more natural and less maddening. It’s accomplishing this with an artificial intelligence-powered system that uses “conversational memory” to help users complete tasks.</p><p>“We noticed bots in general would take what the user said at face value, without connecting the dots of what was said before in the conversation,” says Posh co-founder and CEO Karan Kashyap ’17, SM ’17. “If you think about your conversations with humans, especially in places like banks with tellers or in customer service, what you said in the past is very important, so we focused on making bots more humanlike by giving them the ability to remember historical information in a conversation.”</p><p>Posh’s chatbots are currently used by over a dozen credit unions across voice- and text-based channels. The well-defined customer base has allowed the company to train its system on only the most relevant data, improving performance.</p><p>The founders plan to gradually partner with companies in other sectors to gather industry-specific data and expand the use of their system without compromising performance. 
Down the line, Kashyap and Posh co-founder and CTO Matt McEachern ’17, SM ’18 plan to provide their chatbots as a platform for developers to build on.</p><p>The expansion plans should attract businesses in a variety of sectors: Kashyap says some credit unions have successfully resolved more than 90 percent of customer calls with Posh’s platform. The company’s expansion may also help alleviate the mind-numbing experience of calling into traditional customer service lines.</p><p>“When we deploy our telephone product, there’s no notion of ‘Press one or press two,’” Kashyap explains. “There’s no dial tone menu. We just say, ‘Welcome to whatever credit union, how can I help you today?’ In a few words, you let us know. We prompt users to describe their problems via natural speech instead of waiting for menu options to be read out.”</p><p><strong>Bootstrapping better bots</strong></p><p>Kashyap and McEachern became friends while pursuing their degrees in MIT’s Department of Electrical Engineering and Computer Science. They also worked together in the same research lab at the Computer Science and Artificial Intelligence Laboratory (CSAIL).</p><p>But their relationship quickly grew outside of MIT. In 2016, the students began software consulting, in part designing chatbots for companies to handle customer inquiries around medical devices, flight booking, personal fitness, and more. Kashyap says they used their time consulting to learn about and take business risks.</p><p>“That was a great learning experience, because we got real-world experience in designing these bots using the tools that were available,” Kashyap says. “We saw the market need for a bot platform and for better bot experiences.”</p><p>From the start, the founders executed a lean business strategy that made it clear the engineering undergrads were thinking long term. 
Upon graduation, the founders used their savings from consulting to fund Posh’s early operations, giving themselves salaries and even hiring some contacts from MIT.</p><p>It also helped that they were accepted into the delta v accelerator, run by the Martin Trust Center for MIT Entrepreneurship, which gave them a summer of guidance and free rent. Following delta v, Posh was accepted into the DCU Fintech Innovation Center, connecting it with one of the largest credit unions in the country and netting the company another 12 months of free rent.<br />
&nbsp;</p> <p>With DCU serving as a pilot customer, the founders got a “crash course” in the credit union industry, Kashyap says. From there they began a calculated expansion to ensure they didn’t grow faster than Posh’s revenue allowed, freeing them from having to raise venture capital.</p><p>The disciplined growth strategy at times forced Posh to get creative. Last year, as the founders were looking to build out new features and grow their team, they secured about $1.5 million in prepayments from eight credit unions in exchange for discounts on their service along with a peer-driven profit-sharing incentive. In total, the company has raised $2.5 million using that strategy.</p><p>Now on more secure financial footing, the founders are poised to accelerate Posh’s growth.</p><p><strong>Pushing the boundaries</strong></p><p>Even referring to today’s automated messaging platforms as chatbots seems generous. Most of the ones on the market today are only designed to understand what a user is asking for, something known as intent recognition.</p><p>The result is that many of the virtual agents in our lives, from the robotic telecom operator to Amazon’s Alexa to the remote control, take directions but struggle to hold a conversation. Posh’s chatbots go beyond intent recognition, using what Kashyap calls context understanding to figure out what users are saying based on the history of the conversation. The founders have a patent pending for the approach.</p><p>“[Context understanding] allows us to more intelligently understand user inputs and handle things like changes in topics without having the bots break,” Kashyap says. “One of our biggest pet peeves was, in order to have a successful interaction with a bot, you as a user have to be very unnatural sometimes to convey what you want to convey or the bot won’t understand you.”</p><p>Kashyap says context understanding is a lot easier to accomplish when designing bots for specific industries. 
That’s why Posh’s founders decided to start by focusing on credit unions.</p><p>“The platforms on the market today are almost spreading themselves too thin to make a deep impact in a particular vertical,” Kashyap says. “If you have banks and telcos and health care companies all using the same [chatbot] service, it’s as if they’re all sharing the same customer service rep. It’s difficult to have one person trained across all of these domains meaningfully.”</p><p>To onboard a new credit union, Posh uses the customer’s conversational data to train its deep learning model.</p><p>“The bots continue to train even after they go live and have actual conversations,” Kashyap says. “We’re always improving it; I don’t think we’ll ever deploy a bot and say it’s done.”</p><p>Customers can use Posh’s bots for online chats, voice calls, SMS messaging, and through third-party channels like Slack, WhatsApp, and Amazon Echo. Posh also offers an analytics platform to help customers analyze what users are calling about.</p><p>For now, Kashyap says he’s focused on quadrupling the number of credit unions using Posh over the next year. Then again, the founders have never let short-term business goals cloud their larger vision for the company.</p><p>“Our perspective has always been that [the robot assistant] Jarvis from ‘Iron Man’ and the AI from the movie ‘Her’ are going to be reality sometime soon,” Kashyap says. “Someone has to pioneer the ability for bots to have contextual awareness and memory persistence. I think there’s a lot more that needs to go into bots overall, but we felt by pushing the boundaries a little bit, we’d succeed where other bots would fail, and ultimately people would like to use our bots more than others.”</p>
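The contrast between plain intent recognition and conversational memory can be illustrated with a toy dialogue state that persists across turns. This is a hypothetical sketch, not Posh's implementation; the slots, keywords, and responses are invented.

```python
# Toy illustration of "conversational memory": slots filled in earlier
# turns persist, so follow-up utterances are interpreted in context.
# This is a hypothetical sketch, not Posh's implementation.

class ContextualBot:
    def __init__(self):
        self.state = {}  # persists across turns within one conversation

    def handle(self, utterance: str) -> str:
        text = utterance.lower()
        # Update the dialogue state with anything the user just said.
        if "checking" in text:
            self.state["account"] = "checking"
        elif "savings" in text:
            self.state["account"] = "savings"
        if "balance" in text:
            self.state["intent"] = "balance"
        # Respond using the accumulated state, not just this utterance.
        if self.state.get("intent") == "balance":
            account = self.state.get("account")
            if account:
                return f"Here is the balance of your {account} account."
            return "Which account would you like the balance for?"
        return "How can I help you today?"

bot = ContextualBot()
print(bot.handle("What's my balance?"))  # asks which account
print(bot.handle("My savings account"))  # remembers the pending intent
```

A pure intent-recognition bot would treat "My savings account" as an unintelligible request; here the remembered intent lets the second turn complete the task.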
The startup Posh has created chatbots that use conversational memory to hold more natural conversations. Posh’s chatbots are currently used by about a dozen credit unions across a variety of voice and text platforms. Image courtesy of Posh

Reducing delays in wireless networks Congestion control system could help streaming video, mobile games, and other applications run more smoothly. Thu, 09 Apr 2020 23:59:59 -0400 Rob Matheson | MIT News Office <p>MIT researchers have designed a congestion-control scheme for wireless networks that could help reduce lag times and increase quality in video streaming, video chat, mobile gaming, and other web services.</p><p>To keep web services running smoothly, congestion-control schemes infer information about a network’s bandwidth capacity and congestion based on feedback from the network routers, which is encoded in data packets. That information determines how fast data packets are sent through the network.</p><p>Deciding a good sending rate can be a tough balancing act. Senders don’t want to be overly conservative: If a network’s capacity constantly varies from, say, 2 megabytes per second to 500 kilobytes per second, the sender could always send traffic at the lowest rate. But then your Netflix video, for example, will be unnecessarily low-quality. On the other hand, if the sender constantly maintains a high rate, even when network capacity dips, it could overwhelm the network, creating a massive queue of data packets waiting to be delivered. Queued packets can increase the network’s delay, causing, say, your Skype call to freeze.</p><p>Things get even more complicated in wireless networks, which have “time-varying links,” with rapid, unpredictable capacity shifts. Depending on various factors, such as the number of network users, cell tower locations, and even surrounding buildings, capacities can double or drop to zero within fractions of a second.
In a paper at the USENIX Symposium on Networked Systems Design and Implementation, the researchers presented “Accel-Brake Control” (ABC), a simple scheme that achieves about 50 percent higher throughput, and about half the network delays, on time-varying links.</p><p>The scheme relies on a novel algorithm that enables the routers to explicitly communicate how many data packets should flow through a network to avoid congestion but fully utilize the network. It provides that detailed information from bottlenecks — such as packets queued between cell towers and senders — by repurposing a single bit already available in internet packets. The researchers are already in talks with mobile network operators to test the scheme.</p><p>“In cellular networks, your fraction of data capacity changes rapidly, causing lags in your service. Traditional schemes are too slow to adapt to those shifts,” says first author Prateesh Goyal, a graduate student in CSAIL. “ABC provides detailed feedback about those shifts, whether it’s gone up or down, using a single data bit.”</p><p>Joining Goyal on the paper are Anup Agarwal, now a graduate student at Carnegie Mellon University; Ravi Netravali, now an assistant professor of computer science at the University of California at Los Angeles; Mohammad Alizadeh, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and CSAIL; and Hari Balakrishnan, the Fujitsu Professor in EECS. The authors have all been members of the Networks and Mobile Systems group at CSAIL.</p><p><strong>Achieving explicit control</strong></p><p>Traditional congestion-control schemes rely on either packet losses or information from a single “congestion” bit in internet packets to infer congestion and slow down. A router, such as a base station, will mark the bit to alert a sender — say, a video server — that its sent data packets are in a long queue, signaling congestion. In response, the sender will then reduce its rate by sending fewer packets.
The sender also reduces its rate if it detects a pattern of packets being dropped before reaching the receiver.</p><p>In attempts to provide greater information about bottlenecked links on a network path, researchers have proposed “explicit” schemes that include multiple bits in packets that specify current rates. But this approach would mean completely changing the way the internet sends data, and it has proved impossible to deploy.&nbsp;</p><p>“It’s a tall task,” Alizadeh says. “You’d have to make invasive changes to the standard Internet Protocol (IP) for sending data packets. You’d have to convince all Internet parties, mobile network operators, ISPs, and cell towers to change the way they send and receive data packets. That’s not going to happen.”</p><p>With ABC, the researchers still use the available single bit in each data packet, but they do so in such a way that the bits, aggregated across multiple data packets, can provide the needed real-time rate information to senders. The scheme tracks each data packet in a round-trip loop, from sender to base station to receiver. The base station marks the bit in each packet with “accelerate” or “brake,” based on the current network bandwidth. When the packet is received, the marked bit tells the sender to increase or decrease the “in-flight” packets — packets sent but not received — that can be in the network.</p><p>If it receives an accelerate command, it means the packet made good time and the network has spare capacity. The sender then sends two packets: one to replace the packet that was received and another to utilize the spare capacity. When told to brake, the sender decreases its in-flight packets by one — meaning it doesn’t replace the packet that was received.</p><p>Used across all packets in the network, that one bit of information becomes a powerful feedback tool that tells senders their sending rates with high precision.
Within a couple hundred milliseconds, it can vary a sender’s rate between zero and double. “You’d think one bit wouldn’t carry enough information,” Alizadeh says. “But, by aggregating single-bit feedback across a stream of packets, we can get the same effect as that of a multibit signal.”</p><p><strong>Staying one step ahead</strong></p><p>At the core of ABC is an algorithm that predicts the aggregate rate of the senders one round-trip ahead to better compute the accelerate/brake feedback.</p><p>The idea is that an ABC-equipped base station knows how senders will behave — maintaining, increasing, or decreasing their in-flight packets — based on how it marked the packet it sent to a receiver. The moment the base station sends a packet, it knows how many packets it will receive from the sender in exactly one round-trip’s time in the future. It uses that information to mark the packets to more accurately match the sender’s rate to the current network capacity.</p><p>In simulations of cellular networks, compared to traditional congestion control schemes, ABC achieves around 30 to 40 percent greater throughput for roughly the same delays. Alternatively, it can cut delays by a factor of around two to four while maintaining the same throughput as traditional schemes. Compared to existing explicit schemes that were not designed for time-varying links, ABC reduces delays by half for the same throughput. “Basically, existing schemes get low throughput and low delays, or high throughput and high delays, whereas ABC achieves high throughput with low delays,” Goyal says.</p><p>Next, the researchers are trying to see if apps and web services can use ABC to better control the quality of content. For example, “a video content provider could use ABC’s information about congestion and data rates to pick the resolution of streaming video more intelligently,” Alizadeh says.
“If it doesn’t have enough capacity, the video server could lower the resolution temporarily, so the video will continue playing at the highest possible quality without freezing.”</p>
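The sender side of the accelerate/brake mechanism described above can be sketched in a few lines. This is an illustrative model of the behavior the article describes, not code from the paper; the class and method names are invented.

```python
# Sketch of an ABC sender: each acknowledgment carries a one-bit mark set
# by the base station. On "accelerate" the sender replaces the delivered
# packet and sends one extra; on "brake" it doesn't replace it at all.

ACCELERATE, BRAKE = 1, 0

class AbcSender:
    def __init__(self, window):
        self.in_flight = window  # packets sent but not yet acknowledged

    def on_ack(self, mark):
        """Handle one acknowledgment; return how many new packets to send."""
        self.in_flight -= 1  # the acknowledged packet has left the network
        to_send = 2 if mark == ACCELERATE else 0
        self.in_flight += to_send
        return to_send

# Over one round trip (one ack per in-flight packet), all-accelerate
# feedback doubles the window, and all-brake feedback drives it to zero —
# matching the article's "vary a sender's rate between zero and double":
fast = AbcSender(window=10)
for _ in range(10):
    fast.on_ack(ACCELERATE)
print(fast.in_flight)  # 20

slow = AbcSender(window=10)
for _ in range(10):
    slow.on_ack(BRAKE)
print(slow.in_flight)  # 0
```

Aggregated over a stream of packets, the single mark per packet thus acts like a fine-grained rate signal, even though each packet carries only one bit.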
To reduce lag times and increase quality in video streaming, mobile gaming, and other web services, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have designed a congestion-control scheme for time-varying wireless links, such as cellular networks. Image: José-Luis Olivares, MIT

Learning about artificial intelligence: A hub of MIT resources for K-12 students New website offers a combination of learning units, hands-on activities, and mentor guides to foster AI literacy. Tue, 07 Apr 2020 14:50:01 -0400 MIT Media Lab <p>In light of the recent events surrounding Covid-19, learning for grades K-12 looks very different than it did a month ago. Parents and educators may be feeling overwhelmed about turning their&nbsp;homes into classrooms.&nbsp;</p> <p>With that in mind, a team led by Media Lab Associate Professor Cynthia Breazeal has launched <a href=””></a> to share a variety of online activities for K-12 students to learn about artificial intelligence, with a focus on how to design and use it responsibly. Learning resources provided on this website can help to address the needs of the millions of children, parents, and educators worldwide who are staying at home due to school closures caused by Covid-19, and are looking for free educational activities that support project-based STEM learning in an exciting and innovative area.&nbsp;</p> <p>The website is a collaboration between the Media Lab, MIT Stephen A. Schwarzman College of Computing, and MIT Open Learning, serving as a hub to highlight diverse work by faculty, staff, and students across the MIT community at the intersection of AI, learning, and education.&nbsp;</p> <p>“MIT is the birthplace of Constructionism under Seymour Papert. MIT has revolutionized how children learn computational thinking with hugely successful platforms such as Scratch and App Inventor. Now, we are bringing this rich tradition and deep expertise to how children learn about AI through project-based learning that dovetails technical concepts with ethical design and responsible use,” says Breazeal.&nbsp;</p> <p>The website will serve as a hub for MIT’s latest work in innovating learning and education in the era of AI. 
In addition to highlighting research, it also features up-to-date project-based activities, learning units, child-friendly software tools, digital interactives, and other supporting materials, highlighting a variety of MIT-developed educational research and collaborative outreach efforts across and beyond MIT. The site is intended for use by students, parents, teachers, and lifelong learners alike, with resources for children and adults at all learning levels, and with varying levels of comfort with technology, for a range of artificial intelligence topics. The team has also gathered a variety of external resources to explore, such as Teachable Machines by Google, a browser-based platform that lets users train classifiers for their own image-recognition algorithms in a user-friendly way.</p> <p>In the spirit of “mens et manus” — the MIT motto, meaning “mind and hand” — the vision of technology for learning at MIT is about empowering and inspiring learners of all ages in the pursuit of creative endeavors. The activities highlighted on the new website are designed in the tradition of constructionism: learning through project-based experiences in which learners build and share their work. The approach is also inspired by the idea of computational action, where children can design AI-enabled technologies to help others in their community.</p> <p>“MIT has been a world leader in AI since the 1960s,” says MIT professor of computer science and engineering Hal Abelson, who has long been involved in MIT’s AI research and educational technology. “MIT’s approach to making machines intelligent has always been strongly linked with our work in K-12 education. 
That work is aimed at empowering young people through computational ideas that help them understand the world and computational actions that empower them to improve life for themselves and their communities.”</p> <p>Research in computer science education and AI education highlights the importance of having a mix of plugged and unplugged learning approaches. Unplugged activities include kinesthetic or discussion-based activities developed to introduce children to concepts in AI and its societal impact without using a computer. Unplugged approaches to learning AI are found to be especially helpful for young children. Moreover, these approaches can also be accessible to learning environments (classrooms and homes) that have limited access to technology.&nbsp;</p> <p>As computers continue to automate more and more routine tasks, inequity of education remains a key barrier to future opportunities, where success depends increasingly on intellect, creativity, social skills, and having specific skills and knowledge. This accelerating change raises the critical question of how to best prepare students, from children to lifelong learners, to be successful and to flourish in the era of AI.</p> <p>It is important to help prepare a diverse and inclusive citizenry to be responsible designers and conscientious users of AI. In that spirit, the activities on <a href=””></a>&nbsp;range from hands-on programming to paper prototyping, to Socratic seminars, and even creative writing about speculative fiction. The learning units and project-based activities are designed to be accessible to a wide audience with different backgrounds and comfort levels with technology. 
A number of these activities leverage learning about AI as a way to connect to the arts, humanities, and social sciences, too, offering a holistic view of how AI intersects with different interests and endeavors.&nbsp;</p> <p>The rising ubiquity of AI affects us all, but today a disproportionately small slice of the population has the skills or power to decide how AI is designed or implemented; worrying consequences have been seen in algorithmic bias and perpetuation of unjust systems. Democratizing AI through education, starting in K-12, will help to make it more accessible and diverse at all levels, ultimately helping to create a more inclusive, fair, and equitable future.</p>
A mural of hopes and questions about artificial intelligence from a middle school workshop Image courtesy of the Personal Robots group/MIT Media Lab

Computational thinking class enables students to engage in Covid-19 response Nearly 300 students join an open course that applies data science, artificial intelligence, and mathematical modeling using the Julia language to study Covid-19. Tue, 07 Apr 2020 14:25:01 -0400 Sandi Miller | Department of Mathematics <p>When an introductory computational science class, which is open to the general public, was repurposed to study the Covid-19 pandemic this spring, the instructors saw student registration rise from 20 students to nearly 300.</p> <p><a href=”″ target=”_blank”>Introduction to Computational Thinking</a> (6.S083/18.S190), which applies data science, artificial intelligence, and mathematical models using the Julia programming language developed at MIT, was introduced in the fall as a pilot half-semester class. It was launched as part of the MIT Stephen A. Schwarzman College of Computing’s computational thinking program and spearheaded by Department of Mathematics Professor <a href=”″>Alan Edelman</a> and Visiting Professor <a href=””>David P.&nbsp;Sanders</a>. They very quickly were able to fast-track the curriculum to focus on applications to Covid-19 responses; students were equally fast in jumping on board.</p> <p>“Everyone at MIT wants to contribute,” says Edelman. “While we at the <a href=””>Julia Lab</a> are doing research in building tools for scientists, Dave and I thought it would be valuable to teach the students about some of the fundamentals related to computation for drug development, disease models, and such.”&nbsp;</p> <p>The course is offered through MIT’s Department of Electrical Engineering and Computer Science and the Department of Mathematics.
“This course opens a trove of opportunities to use computation to better understand and contain the Covid-19 pandemic,” says MIT Computer Science and Artificial Intelligence Laboratory Director <a href=””>Daniela Rus</a>.</p> <p>The fall version of the class had a maximum enrollment of 20 students, but the spring class ballooned to nearly 300 students in one weekend, almost all from MIT. “We’ve had a tremendous response,” Edelman says.&nbsp;“This definitely stressed the MIT sign-up systems in ways that I could not have imagined.”</p> <p>Sophomore Shinjini Ghosh, majoring in computer science and linguistics, says she was initially drawn to the class to learn Julia, “but also to develop the skills to do further computational modeling and conduct research on the spread and possible control of Covid-19.”</p> <p>“There’s been a lot of misinformation about the epidemiology and statistical modeling of the coronavirus,” adds sophomore Raj Movva, a computer science and biology major. “I think this class will help clarify some details, and give us a taste of how one might actually make predictions about the course of a pandemic.”&nbsp;</p> <p>Edelman says that he has always dreamed of an interdisciplinary modern class that would combine the machine learning and AI of a “data-driven” world, the modern software and systems possibilities that Julia allows, and the physical models, differential equations, and scientific machine learning of the “physical world.”&nbsp;</p> <p>He calls this class “a natural outgrowth of <a href=””>Julia Lab’s</a> research, and that of the general cooperative open-source Julia community.” For years, this online community has collaborated to create tools to speed up the drug approval process, aid in scientific machine learning and differential equations, and predict infectious disease transmission.
“The lectures are open to the world, following the great MIT tradition of MIT open courses,” says Edelman.</p> <p>So when MIT turned to virtual learning to de-densify campus, the transition to an online, remotely taught version of the class was not too difficult for Edelman and Sanders.</p> <p>”Even though we have run open remote learning courses before, it’s never the same as being able to see the live audience in front of you,” says Edelman.&nbsp;“However, MIT students ask such great questions in the Zoom chat, so that it remains as intellectually invigorating as ever.”</p> <p>Sanders, a Marcos Moshinsky research fellow currently on leave as a professor at the National University of Mexico, is working on techniques for accelerating global optimization. Involved with the Julia Lab since 2014, Sanders has worked with Edelman on various teaching, research, and outreach projects related to Julia, and his <a href=””>YouTube tutorials</a> have reached over 100,000 views. “His videos have often been referred to as the best way to learn the Julia language,” says Edelman.</p> <p>Edelman will also be enlisting some help from Philip, his family’s Corgi who until recently had been a frequent wanderer of MIT’s halls and classrooms. “Philip is a well-known Julia expert whose image has been classified many times by Julia’s AI Systems,” says Edelman. “Students are always happy when Philip participates in the online classes.”</p>
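The kind of disease model such a course introduces can be sketched with the classic SIR (susceptible-infected-recovered) equations. This is an illustration only: the class itself works in Julia, its actual models are more sophisticated, and the parameter values below are invented.

```python
# Classic SIR epidemic model stepped with forward Euler.
# S, I, R are fractions of the population; beta is the transmission rate
# and gamma the recovery rate (values here are made up for illustration).

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """One Euler step of: dS/dt = -beta*S*I,
    dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days, beta=0.3, gamma=0.1):
    s, i, r = 0.99, 0.01, 0.0  # start with 1% of the population infected
    trajectory = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
        trajectory.append((s, i, r))
    return trajectory

traj = simulate(160)
peak = max(traj, key=lambda state: state[1])
print(f"peak infected fraction: {peak[1]:.2f}")
```

Even this minimal model exhibits the qualitative behavior students study: an epidemic curve that rises, peaks, and declines, with the peak height controlled by the ratio beta/gamma.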
Left to right: Department of Mathematics Professor Alan Edelman, his co-instructor and family Corgi Philip, and visiting professor and longtime Julia Lab collaborator David Sanders have altered their computational thinking course to encourage input on Covid-19 responses.

Q&A: Markus Buehler on setting coronavirus and AI-inspired proteins to music Translated into sound, SARS-CoV-2 tricks our ear in the same way the virus tricks our cells. Thu, 02 Apr 2020 16:50:01 -0400 Kim Martineau | MIT Quest for Intelligence <p><em>The proteins that make up all living things are alive with music. Just ask Markus Buehler: The musician and MIT professor develops artificial intelligence models to design new proteins, sometimes by <a href=”” target=”_blank”>translating them into sound</a>. His goal is to create new biological materials for sustainable, non-toxic applications. In a project with the&nbsp;<a href=””>MIT-IBM Watson AI Lab</a>, Buehler is searching for a protein to extend the shelf-life of perishable food. In a&nbsp;<a href=”″>new study</a>&nbsp;in </em>Extreme Mechanics Letters<em>, he and his colleagues offer a promising candidate: a silk protein made by honeybees for use in hive building.&nbsp;</em></p> <p><em>In&nbsp;<a href=”″>another recent study</a>, in </em>APL Bioengineering<em>, he went a step further and used AI to discover an entirely new protein. As both studies went to print, the Covid-19 outbreak was surging in the United States, and Buehler turned his attention to the spike protein of SARS-CoV-2, the appendage that makes the novel coronavirus so contagious. He and his colleagues are trying to <a href=”” target=”_blank”>unpack its vibrational properties</a> through molecular-based sound spectra, which could hold one key to stopping the virus. Buehler recently sat down to discuss the art and science of his work. </em></p> <p><strong>Q:&nbsp;</strong>Your work focuses on the alpha helix proteins found in skin and hair. What makes these proteins so intriguing?<strong>&nbsp;</strong></p> <p><strong>A:</strong> Proteins are the bricks and mortar that make up our cells, organs, and body. Alpha helix proteins are especially important.
Their spring-like structure gives them elasticity and resilience, which is why skin, hair, feathers, hooves, and even cell membranes are so durable. But they’re not just tough mechanically, they have built-in antimicrobial properties. With IBM, we’re trying to harness this biochemical trait to create a protein coating that can slow the spoilage of quick-to-rot foods like strawberries.</p> <p><strong>Q:&nbsp;</strong>How did you enlist AI to produce this silk protein?</p> <p><strong>A:</strong>&nbsp;We trained a deep learning model on the Protein Data Bank, which contains the amino acid sequences and three-dimensional shapes of about 120,000 proteins. We then fed the model a snippet of an amino acid chain for honeybee silk and asked it to predict the protein’s shape, atom-by-atom. We validated our work by synthesizing the protein for the first time in a lab — a first step toward developing a thin antimicrobial, structurally-durable coating that can be applied to food. My colleague,&nbsp;<a href=””>Benedetto Marelli</a>, specializes in this part of the process. We also used the platform to predict the structure of proteins that don’t yet exist in nature. That’s how we designed our entirely new protein in the <em>APL Bioengineering</em> study.&nbsp;</p> <p><strong>Q: </strong>How does your model improve on other protein prediction methods?<strong>&nbsp;</strong></p> <p><strong>A:</strong> We use end-to-end prediction. The model builds the protein’s structure directly from its sequence, translating amino acid patterns into three-dimensional geometries. It’s like translating a set of IKEA instructions into a built bookshelf, minus the frustration. Through this approach, the model effectively learns how to build a protein from the protein itself, via the language of its amino acids. Remarkably, our method can accurately predict protein structure without a template. It outperforms other folding methods and is significantly faster than physics-based modeling. 
Because the Protein Data Bank is limited to proteins found in nature, we needed a way to visualize new structures to make new proteins from scratch.</p> <p><strong>Q: </strong>How could the model be used to design an actual protein?</p> <p><strong>A:</strong> We can build atom-by-atom models for sequences found in nature that haven’t yet been studied, as we did in the <em>APL Bioengineering</em> study using a different method. We can visualize the protein’s structure and use other computational methods to assess its function by analyzing its stability and the other proteins it binds to in cells. Our model could be used in drug design or to interfere with protein-mediated biochemical pathways in infectious disease.</p> <p><strong>Q:&nbsp;</strong>What’s the benefit of translating proteins into sound?</p> <p><strong>A:</strong> Our brains are great at processing sound! In one sweep, our ears pick up all of its hierarchical features: pitch, timbre, volume, melody, rhythm, and chords. We would need a high-powered microscope to see the equivalent detail in an image, and we could never see it all at once. Sound is such an elegant way to access the information stored in a protein.&nbsp;</p> <p>Typically, sound is made from vibrating a material, like a guitar string, and music is made by arranging sounds in hierarchical patterns. With AI we can combine these concepts, and use molecular vibrations and neural networks to construct new musical forms. We’ve been working on methods to turn protein structures into audible representations, and translate these representations into new materials.&nbsp;</p> <p><strong>Q: </strong>What can the <a href=”” target=”_blank”>sonification of SARS-CoV-2’s “spike” protein</a> tell us?</p> <p><strong>A: </strong>Its protein spike&nbsp;contains three protein chains folded into an <a href=”” target=”_blank”>intriguing pattern</a>. These structures are too small for the eye to see, but they can be heard.
We represented the physical protein structure, with its entangled chains, as interwoven melodies that form a multi-layered composition. The spike protein’s amino acid sequence, its secondary structure patterns, and its intricate three-dimensional folds are all featured. The resulting piece is a form of counterpoint music, in which notes are played against notes. Like a symphony, the musical patterns reflect the protein’s intersecting geometry realized by materializing its DNA code.</p> <p><strong>Q: </strong>What did you learn?</p> <p><strong>A: </strong>The virus has an uncanny ability to deceive and exploit the host for its own multiplication. Its genome hijacks the host cell’s protein manufacturing machinery, and forces it to replicate the viral genome and produce viral proteins to make new viruses. As you listen, you may be surprised by the pleasant, even relaxing, tone of the music. But it tricks our ear in the same way the virus tricks our cells. It’s an invader disguised as a friendly visitor. Through music, we can see the SARS-CoV-2 spike from a new angle, and appreciate the urgent need to learn the language of proteins.&nbsp;&nbsp;</p> <p><strong>Q: </strong>Can any of this address Covid-19, and the virus that causes it?</p> <p><strong>A:</strong>&nbsp;In the longer term, yes. Translating proteins into sound gives scientists another tool to understand and design proteins. Even a small mutation can limit or enhance the pathogenic power of SARS-CoV-2. Through sonification, we can also compare the biochemical processes of its spike protein with previous coronaviruses, like SARS or MERS.&nbsp;</p> <p>In the music we created, we analyzed the vibrational structure of the spike protein that infects the host. Understanding these vibrational patterns is critical for drug design and much more. Vibrations may change as temperatures warm, for example, and they may also tell us why the SARS-CoV-2 spike gravitates toward human cells more than other viruses. 
We’re exploring these questions in current, ongoing research with my graduate students.&nbsp;</p> <p>We might also use a compositional approach to design drugs to attack the virus. We could search for a new protein that matches the melody and rhythm of an antibody capable of binding to the spike protein, interfering with its ability to infect.</p> <p><strong>Q: </strong>How can music aid protein design?</p> <p><strong>A:</strong> You can think of music as an algorithmic reflection of structure. Bach’s Goldberg Variations, for example, are a brilliant realization of counterpoint, a principle we’ve also found in proteins. We can now hear this concept as nature composed it, and compare it to ideas in our imagination, or use AI to speak the language of protein design and let it imagine new structures. We believe that the analysis of sound and music can help us understand the material world better. Artistic expression is, after all, just a model of the world within us and around us. &nbsp;</p> <p><em>Co-authors of the study in </em>Extreme Mechanics Letters<em> are: Zhao Qin, Hui Sun, Eugene Lim and Benedetto Marelli at MIT; and Lingfei Wu, Siyu Huo, Tengfei Ma and Pin-Yu Chen at IBM Research. Co-author of the study in </em>APL Bioengineering<em> is Chi-Hua Yu. Buehler’s sonification work is supported by MIT’s Center for Art, Science and Technology (CAST) and the Mellon Foundation.&nbsp;</em></p>
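The simplest form of protein sonification can be sketched as a lookup from residues to pitches. This mapping is invented for illustration; Buehler's actual method derives sound from molecular vibrational spectra and structural hierarchy, not a per-residue table.

```python
# Toy sonification: map an amino-acid sequence to MIDI note numbers.
# The residue-to-pitch assignment below is arbitrary, chosen only to show
# the idea of turning a sequence into an audible melody.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

# Assign each residue a MIDI note starting at C3 (note number 48).
NOTE_OF = {aa: 48 + idx for idx, aa in enumerate(AMINO_ACIDS)}

def sonify(sequence):
    """Return a list of MIDI note numbers for a protein sequence,
    skipping any characters that aren't standard residues."""
    return [NOTE_OF[aa] for aa in sequence.upper() if aa in NOTE_OF]

# A short example fragment (illustrative, not the actual spike sequence):
melody = sonify("MFVFLVLLPLVSS")
print(melody)
```

A real pipeline would also encode secondary structure and vibrational modes as rhythm, timbre, and counterpoint, which is what gives the SARS-CoV-2 spike composition its layered character.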
MIT Professor Markus Buehler designs new proteins with the help of artificial intelligence. He recently translated the spike protein of the novel coronavirus (SARS-Cov-2) into sound to visualize its vibrational properties, as seen here, which could help in finding ways to stop the virus. Primary colors represent the spike’s three protein chains. Image courtesy of Markus Buehler.

Neural networks facilitate optimization in the search for new materials Sorting through millions of possibilities, a search for battery materials delivered results in five weeks instead of 50 years. Thu, 26 Mar 2020 00:00:00 -0400 David L. Chandler | MIT News Office <p>When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.</p><p>As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.</p><p>The findings are reported in the journal <em>ACS Central Science</em>, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet PhD ’19, Sahasrajit Ramesh, and graduate student Chenru Duan.</p><p>The study looked at a set of materials called transition metal complexes. These can exist in a vast number of different forms, and Kulik says they “are really fascinating, functional materials that are unlike a lot of other material phases. The only way to understand why they work the way they do is to study them using quantum mechanics.”</p><p>To predict the properties of any one of millions of these materials would require either time-consuming and resource-intensive spectroscopy and other lab work, or time-consuming, highly complex physics-based computer modeling for each possible candidate material or combination of materials. 
Each such study could consume hours to days of work.</p><p>Instead, Kulik and her team took a small number of different possible materials and used them to teach an advanced machine-learning neural network about the relationship between the materials’ chemical compositions and their physical properties. That knowledge was then applied to generate suggestions for the next generation of possible materials to be used for the next round of training of the neural network. Through four successive iterations of this process, the neural network improved significantly each time, until reaching a point where it was clear that further iterations would not yield any further improvements.</p><p>This iterative optimization system greatly streamlined the process of arriving at potential solutions that satisfied the two conflicting criteria being sought. The set of best solutions in such situations, where improving one factor tends to worsen the other, is known as a Pareto front, representing a graph of the points such that any further improvement of one factor would make the other worse. In other words, the graph represents the best possible compromise points, depending on the relative importance assigned to each factor.</p><p>Training typical neural networks requires very large data sets, ranging from thousands to millions of examples, but Kulik and her team were able to use this iterative process, based on the Pareto front model, to streamline the process and provide reliable results using only a few hundred samples.</p><p>In the case of screening for the flow battery materials, the desired characteristics were in conflict, as is often the case: The optimum material would have high solubility and a high energy density (the ability to store energy for a given weight).
But increasing solubility tends to decrease the energy density, and vice versa.</p><p>Not only was the neural network able to rapidly come up with promising candidates, it also was able to assign levels of confidence to its different predictions through each iteration, which helped to allow the refinement of the sample selection at each step. “We developed a better than best-in-class uncertainty quantification technique for really knowing when these models were going to fail,” Kulik says.</p><p>The challenge they chose for the proof-of-concept trial was materials for use in redox flow batteries, a type of battery that holds promise for large, grid-scale batteries that could play a significant role in enabling clean, renewable energy. Transition metal complexes are the preferred category of materials for such batteries, Kulik says, but there are too many possibilities to evaluate by conventional means. They started out with a list of 3 million such complexes before ultimately whittling that down to the eight good candidates, along with a set of design rules that should enable experimentalists to explore the potential of these candidates and their variations.</p><p>“Through that process, the neural net both gets increasingly smarter about the [design] space, but also increasingly pessimistic that anything beyond what we’ve already characterized can further improve on what we already know,” she says.</p><p>Apart from the specific transition metal complexes suggested for further investigation using this system, she says, the method itself could have much broader applications. “We do view it as the framework that can be applied to any materials design challenge where you’re really trying to address multiple objectives at once. You know, all of the most interesting materials design challenges are ones where you have one thing you’re trying to improve, but improving that worsens another. 
And for us, the redox flow battery redox couple was just a good demonstration of where we think we can go with this machine learning and accelerated materials discovery.”</p><p>For example, optimizing catalysts for various chemical and industrial processes is another kind of such complex materials search, Kulik says. Presently used catalysts often involve rare and expensive elements, so finding similarly effective compounds based on abundant and inexpensive materials could be a significant advantage.</p><p>“This paper represents, I believe, the first application of multidimensional directed improvement in the chemical sciences,” she says. But the long-term significance of the work lies in the methodology itself, which makes possible analyses that might not be achievable at all otherwise. “You start to realize that even with parallel computations, these are cases where we wouldn’t have come up with a design principle in any other way. And these leads that are coming out of our work, these are not necessarily at all ideas that were already known from the literature or that an expert would have been able to point you to.”</p><p>“This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications,” says George Schatz, a professor of chemistry and of chemical and biological engineering at Northwestern University, who was not associated with this work. He says this research addresses “how to do machine learning when there are multiple objectives.
Kulik’s approach uses leading edge methods to train an artificial neural network that is used to predict which combination of transition metal ions and organic ligands will be best for redox flow battery electrolytes.”</p><p>Schatz says “this method can be used in many different contexts, so it has the potential to transform machine learning, which is a major activity around the world.”</p><p>The work was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the Burroughs Wellcome Fund, and the AAAS Marion Milligan Mason Award.</p>
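The Pareto-front selection step described above can be illustrated with a short sketch (toy candidates and scores, not the authors' actual code): a candidate survives only if no other candidate is at least as good on both objectives and strictly better on one.

```python
# Illustrative Pareto-front filter for two competing objectives, here
# solubility and energy density, both to be maximized (hypothetical data).

def pareto_front(candidates):
    """Return names of candidates not dominated by any other candidate.

    A candidate dominates another if it is at least as good on both
    objectives and strictly better on at least one.
    """
    front = []
    for name, sol, ed in candidates:
        dominated = any(
            (s >= sol and e >= ed) and (s > sol or e > ed)
            for n, s, e in candidates
        )
        if not dominated:
            front.append(name)
    return front

# Toy scores: (name, solubility, energy density)
scores = [
    ("A", 0.9, 0.2),  # very soluble, low energy density
    ("B", 0.5, 0.5),  # balanced compromise
    ("C", 0.2, 0.9),  # dense but poorly soluble
    ("D", 0.4, 0.4),  # beaten by B on both axes
]

print(pareto_front(scores))  # → ['A', 'B', 'C']
```

In the iterative scheme the article describes, each training round would rescore candidates and re-draw this front, with the network's uncertainty estimates guiding which candidates to evaluate next.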
An iterative, multi-step process for training a neural network, as depicted at top left, leads to an assessment of the tradeoffs between two competing qualities, as depicted in graph at center. The blue line represents a so-called Pareto front, defining the cases beyond which the materials selection cannot be further improved. This makes it possible to identify specific categories of promising new materials, such as the one depicted by the molecular diagram at right. Image: courtesy of the researchers

“Inactive” pill ingredients could raise the dose of your medication With help from artificial intelligence, researchers identify hidden power of vitamin A and ordinary chewing gum glaze. Tue, 17 Mar 2020 15:20:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>The average medication contains a mix of eight “inactive” ingredients added to pills to make them taste better, last longer, and&nbsp;stabilize the active ingredients within.&nbsp;Some of those additives are now getting a closer look for their ability to cause allergic reactions in some patients. But now, in a new twist, MIT researchers have discovered that two other inactive ingredients may actually boost medication strength to the benefit of some patients.&nbsp;</p> <p>In a study published March 17 in&nbsp;<em>Cell Reports,</em>&nbsp;researchers report that vitamin A&nbsp;palmitate, a common supplement, and gum resin, a popular glazing agent for pills and chewing gum, could make hundreds of drugs more effective, from blood-clotting agents and anti-cancer drugs to over-the-counter pain relievers. They also outline a method for using machine learning to find other inactive ingredients with untapped therapeutic value.&nbsp;</p> <p>“Anything you ingest has a potential effect, but tracing that effect to the molecular level can be a Herculean effort,” says the study’s senior author&nbsp;<a href=””>Giovanni Traverso</a>, an assistant professor in the Department of Mechanical Engineering and a gastroenterologist at Brigham and Women’s Hospital. “Machine learning gives you a way to narrow down the search space.”</p> <p>The researchers&nbsp;chose to focus their search on two&nbsp;proteins in the body&nbsp;known for their outsized role in drug delivery: the transporter protein&nbsp;P-glycoprotein (P-gp) and the&nbsp;metabolic protein&nbsp;UDP-Glucuronosyltransferase-2B7 (UGT2B7).&nbsp;One or both&nbsp;are involved in modulating the effects of 20 percent of the nearly 1,900 drugs&nbsp;approved by the U.S.
Food and Drug Administration (FDA).&nbsp;</p> <p>The researchers wanted to know if any of the FDA’s 800 approved food and drug additives would skew the functioning of either protein. Screening all 800 compounds by hand would be tedious and expensive. So, instead, they built a computer platform to do the work for them, adapting a method used by pharmaceutical companies to rule out drug-on-drug interactions.</p> <p>They fed the system the chemical structures of the FDA’s 800 inactive ingredients, as well as millions of drugs and other compounds known to interfere with enzyme functioning. They then asked the platform to predict which food and drug additives would be most likely to disrupt P-gp and UGT2B7 and alter a drug’s potency by letting more into the body, in the case of P-gp, or slowing its exit, as in UGT2B7.</p> <p>Machine learning allowed the researchers to quickly make comparisons between millions of drugs and inactive ingredients to identify the additives most likely to have an effect.&nbsp;Two top candidates emerged: vitamin A palmitate, as a predicted inhibitor of P-gp, and abietic acid, an ingredient in gum resin (basically, tree sap), as a predicted inhibitor of UGT2B7.</p> <p>The researchers next moved to physically test the computer’s predictions in the lab. In one experiment, they gave mice vitamin A-fortified water followed by a normal dose of the blood-thinner, warfarin. With a simple blood test, they confirmed the mice had absorbed 30 percent more medication, a strong indication that vitamin A&nbsp;had improved the uptake of warfarin.&nbsp;</p> <p>In a second experiment, they treated a small slice of pig liver with a substance that loses its ability to fluoresce as UGT2B7 digests it. When abietic acid was added, the substance continued to fluoresce. Drug developers use the test to confirm that a drug acts as an enzyme inhibitor, and here, researchers confirmed that abietic acid had, in fact, targeted UGT2B7 as predicted. 
Though no actual drug was tested, the results suggest that if gum resin were taken with a&nbsp;common pain reliever like ibuprofen, it could increase its strength, Traverso says, much as vitamin A had for warfarin in mice.</p> <p>Machine learning methods are increasingly helping to identify and design new drugs. In a recent discovery, MIT researchers used a deep learning algorithm to find an entirely new antibiotic&nbsp;in the&nbsp;Drug Repurposing Hub, a&nbsp;database of compounds approved, or under review, for human use.&nbsp;Hiding in plain sight as a proposed diabetes treatment,&nbsp;<a href=””>the compound</a>&nbsp;was identified because the algorithm had no preconceived&nbsp;ideas of what a bacteria-killing agent should look like.</p> <p>Much like the Drug Repurposing Hub, the FDA’s inactive ingredient list is a big draw for drug developers. The ingredients are already on the market, even if they have yet to be approved for a new&nbsp;use, says the study’s lead author,&nbsp;<a href=””>Daniel Reker,</a>&nbsp;a Swiss National Science Foundation postdoc at MIT’s&nbsp;<a href=””>Koch Institute for Integrative Cancer Research</a>.&nbsp;If a promising biological association is uncovered, the discovery can be moved quickly to clinical trials. It can take years, by contrast, to test the safety of new molecules synthesized or discovered in the lab.&nbsp;</p> <p>“While further tests are necessary to understand how strong these effects are in humans, our algorithms drew new conclusions that&nbsp;could have immediate impact,”&nbsp;says Reker. “Drug discovery is such a long and costly process, we’re excited that machine learning can help to improve the odds.”&nbsp;</p> <p>The team became interested in the hidden effects of inactive ingredients after a patient of Traverso’s with celiac disease grew sicker after taking a drug later found to contain gluten. 
Traverso and his colleagues wondered what other supposedly inert ingredients might be affecting patients.&nbsp;</p> <p>Sifting through the ingredient lists of some 42,000 medications sold in the United States, they found that more than half contained at least one type of sugar that people with irritable bowel syndrome are told to avoid; 45 percent contained lactose; and a third contained a food dye linked to allergic reactions.&nbsp;Though one pill might not contain enough of an ingredient to cause trouble, it could add up in patients on multiple medications, the researchers warned.&nbsp;They&nbsp;<a href=”″>reported</a>&nbsp;their findings last year in&nbsp;<em>Science Translational Medicine.</em></p> <p>In the current study, the researchers chose to explore the therapeutic value of those same ingredients. Comparing the chemical structures of the 800 “inactive” ingredients with nearly 1,900 approved&nbsp;drug compounds, they found a surprising amount of overlap in their chemical structures and properties. That motivated them to try and predict the biological effects of all 800 ingredients. In the end, they found thousands of previously undocumented effects, an indication that other beneficial compounds beyond vitamin A palmitate and gum resin may be awaiting discovery.</p> <p>If confirmed in clinical trials, vitamin A and gum resin could be added to hundreds of drugs modulated by the P-gp or UGT2B7 proteins, including cancer-fighting drugs, to improve uptake of the medicine or reduce the amount needed in formulations, the researchers say. 
Vitamin A, gum resin, and compounds like them might also be a starting point for entirely new drugs.&nbsp;</p> <p>“As machine learning brings us closer to a more personalized form of medicine, doctors will be able to treat patients more effectively by taking into account their diet, the inactive ingredients in their medications, and other factors,” says Reker.</p> <p>The study was funded, in part, by the MIT-IBM Watson AI Lab and the National Institutes of Health.</p>
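The screening platform described above compared the chemical structures of additives against compounds known to interfere with the two proteins. A minimal sketch of that idea, using hypothetical structural fingerprints rather than the study's actual model, flags an additive when it closely resembles a known inhibitor:

```python
# Similarity-based screening sketch: compounds are represented as sets of
# structural-feature bits (purely illustrative positions, not real
# fingerprints), and an additive is flagged if its Tanimoto similarity to
# any known inhibitor of the target protein meets a threshold.

def tanimoto(a, b):
    """Tanimoto similarity between two feature sets (0 to 1)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_candidates(additives, known_inhibitors, threshold=0.6):
    """Return (name, best similarity score) for additives that resemble
    at least one known inhibitor."""
    hits = []
    for name, fp in additives.items():
        best = max(tanimoto(fp, ref) for ref in known_inhibitors)
        if best >= threshold:
            hits.append((name, round(best, 2)))
    return hits

# Toy reference fingerprints for known P-gp inhibitors
known_pgp_inhibitors = [{1, 2, 3, 5, 8}, {2, 3, 5, 7}]

additives = {
    "vitamin A palmitate": {1, 2, 3, 5, 9},  # resembles the first inhibitor
    "lactose":             {10, 11, 12},     # structurally unrelated
}

print(flag_candidates(additives, known_pgp_inhibitors))
# → [('vitamin A palmitate', 0.67)]
```

A real pipeline would use chemistry-aware fingerprints and a trained interaction model, but the ranking-by-structural-similarity step works the same way.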
An algorithm designed to predict the biological effects of 800 “inactive” pill ingredients uncovered thousands of previously undocumented effects. In this visualization of the researchers’ results, inactive ingredients are shown in blue and their protein targets, in red. The number of predicted, and yet-to-be-verified, interactions (gray edges) outnumber those documented so far (in black). Image: Daniel Reker/Koch Institute

Deep learning for mechanical property evaluation New technique allows for more precise measurements of deformation characteristics using nanoindentation tools. Mon, 16 Mar 2020 14:59:59 -0400 David L. Chandler | MIT News Office <p>A standard method for testing some of the mechanical properties of materials is to poke them with a sharp point. This “indentation technique” can provide detailed measurements of how the material responds to the point’s force, as a function of its penetration depth.</p><p>With advances in nanotechnology during the past two decades, the indentation force can be measured to a resolution on the order of one-billionth of a Newton (a measure of the force approximately equivalent to the force you feel when you hold a medium-sized apple in your hand), and the sharp tip’s penetration depth can be captured to a resolution as small as a nanometer, or about 1/100,000 the diameter of a human hair. Such instrumented nanoindentation tools have provided new opportunities for probing physical properties in a wide variety of materials, including metals and alloys, plastics, ceramics, and semiconductors.</p><p>But while indentation techniques, including nanoindentation, work well for measuring some properties, they exhibit large errors when probing plastic properties of materials — the kind of permanent deformation that happens, for example, if you press your thumb into a piece of silly putty and leave a dent, or when you permanently bend a paper clip using your fingers. Such tests can be important in a wide variety of industrial applications, including conventional and digital manufacturing (3-D printing) of metallic structures, material quality assurance of engineering parts, and optimization of performance and cost. 
However, conventional indentation tests and existing methods to extract critical properties can be highly inaccurate.</p><p>Now, an international research team comprising researchers from MIT, Brown University, and Nanyang Technological University (NTU) in Singapore has developed a new analytical technique that can improve the estimation of mechanical properties of metallic materials from instrumented indentation, with as much as 20 times greater accuracy than existing methods. Their findings are described today in the <em>Proceedings of the National Academy of Sciences</em>, in a paper combining indentation experiments with computational modeling of materials using the latest machine learning tools.</p><p>The team includes co-lead and senior author Ming Dao, a principal research scientist at MIT, and senior author Subra Suresh, MIT Vannevar Bush Professor Emeritus who is president and distinguished university professor at NTU Singapore. Their co-authors are doctoral student Lu Lu and Professor George Em Karniadakis of Brown University and research fellow Punit Kumar and Professor Upadrasta Ramamurty of NTU Singapore.</p> <p><img alt=”” src=”/sites/default/files/images/inline/images/mechanical-properties.gif” style=”width: 500px; height: 318px;” /></p> <p><em><span style=”font-size:10px;”>Animation showing schematically the process of extracting mechanical properties from indentation tests. It is a challenging task to accurately obtain the yield strength and nonlinear mechanical behavior from indentation tests. Courtesy of the researchers.</span></em></p><p><strong>“Small” challenges beyond elasticity</strong></p><p>“Indentation is a very good method for testing mechanical properties,” Dao says, especially in cases where only small samples are available for testing.
“When you try to develop new materials, you often have only a small quantity, and you can use indentation or nanoindentation to test really small quantities of materials,” he says.</p><p>Such testing can be quite accurate for elastic properties — that is, situations where the material bounces back to its original shape after having been poked. But when the applied force goes beyond the material’s “yield strength” — the point at which the poking leaves a lasting mark on the surface — this is called plastic deformation, and traditional indentation testing becomes much less accurate. “In fact, there’s no widely available method that’s being used” that can produce reliable information in such cases, Dao says.</p><p>Indentation can be used to determine hardness, but Dao explains that “hardness is only a combination of a material’s elastic and plastic properties. It’s not a ‘clean’ parameter that can be used directly for design purposes. … But properties at or beyond yield strength, the strength denoting the point at which the material begins to deform irreversibly, are important to assess the material’s suitability for engineering applications.”</p><p><strong>Technique demands smaller amounts of high-quality data</strong></p><p>The new method does not require any changes to experimental equipment or operation, but rather provides a way to work with the data to improve the accuracy of its predictions. By using an advanced neural network machine-learning system, the team found that a carefully planned integration of both real experimental data and computer-generated “synthetic” data of different levels of accuracy (a so-called multifidelity approach to deep learning) can produce the kind of quick and simple yet highly accurate data that industrial applications require for testing materials.</p><p>Traditional machine learning approaches require large amounts of high-quality data.
However, detailed experiments on actual material samples are time-consuming and expensive to conduct. But the team found that doing the neural network training with lots of low-cost synthetic data and then incorporating a relatively small number of real experimental data points — somewhere between three and 20, as compared with 1,000 or more accurate, albeit high-cost, datasets — can substantially improve the accuracy of the outcome. In addition, they utilize established scaling laws to further reduce the number of training datasets needed in covering the parameter space for all engineering metals and alloys.</p><p>What’s more, the authors found that the majority of the time-consuming training process can be done ahead of time, so that for evaluating the actual tests a small number of real experimental results can be added for “calibration” training just when they’re needed, and give highly accurate results.</p><p><img alt=”” src=”/sites/default/files/images/inline/images/mechanical-properties-2.gif” style=”width: 500px; height: 344px;” /></p> <p><em><span style=”font-size:10px;”>Animation illustrating the key features and advantages of the novel “multi-fidelity” deep learning method. Courtesy of the researchers.</span></em></p> <p><strong>Applications for&nbsp;digital manufacturing and more</strong></p><p>These multifidelity deep-learning approaches have been validated using conventionally manufactured aluminum alloys as well as 3-D-printed titanium alloys.</p><p>Professor Javier Llorca, scientific director of IMDEA Materials Institute in Madrid, who was not connected with this research, says, “The new approach takes advantage of novel machine learning strategies to improve the accuracy of the predictions and has a large potential for fast screening of the mechanical properties of components manufactured by 3-D printing. 
It will allow one to discriminate the differences in the mechanical properties in different regions of the 3-D-printed components, leading to more accurate designs.”</p><p>Professor Ares Rosakis at Caltech, who also was not connected with this work, says this approach “results in remarkable computational efficiency and in unprecedented predictive accuracy of the mechanical properties. … Most importantly, it provides a previously unavailable, fresh pair of eyes for ensuring mechanical property uniformity as well as manufacturing reproducibility of 3D-printed components of complex geometry for which classical testing is impossible.”</p><p>In principle, the basic process they use could be extended and applied to many other kinds of problems involving machine-learning, Dao says. “This idea, I think, can be generalized to solve other challenging engineering problems.” The use of the real experimental data helps to compensate for the idealized conditions assumed in the synthetic data, where the shape of the indenter tip is perfectly sharp, the motion of the indenter is perfectly smooth, and so on. By using “hybrid” data that includes both the idealized and the real-world situations, “the end result is a drastically reduced error,” he says.</p><p>The work was supported by the Army Research Laboratory, the U.S. Department of Energy, and the Nanyang Technological University Distinguished University Professorship.</p>
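The multifidelity idea described above can be sketched in miniature: pretrain a surrogate on abundant, cheap synthetic data, then calibrate it with a handful of expensive real measurements. This toy one-dimensional version (hypothetical numbers, an affine correction instead of the paper's neural network) shows the pattern of a small "calibration" step correcting a systematic bias in the simulation.

```python
# Multifidelity sketch: a simulation-trained surrogate is corrected with a
# few real data points via a least-squares affine fit (all values toy).

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ≈ slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def synthetic_model(depth):
    # Cheap surrogate trained on idealized simulations; assume it
    # systematically overestimates the measured property.
    return 2.0 * depth + 1.0

# A few expensive real experiments: (indentation depth, measured value)
real = [(1.0, 2.5), (2.0, 4.2), (3.0, 5.8)]

# Calibration: map the surrogate's predictions onto the real measurements.
preds = [synthetic_model(d) for d, _ in real]
a, b = fit_line(preds, [v for _, v in real])

def calibrated(depth):
    """Surrogate prediction corrected by the real-data calibration."""
    return a * synthetic_model(depth) + b

print(round(calibrated(2.5), 2))  # → 4.99
```

Three real points are enough here because the synthetic model already captures the trend; the calibration only has to learn the bias, which mirrors the paper's finding that a handful of experiments can anchor a largely synthetic training set.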
An international research team used an advanced neural network machine-learning system to improve the accuracy of tests probing the plastic properties of materials — which can be important in a wide variety of industrial applications.

The elephant in the server room Catherine D’Ignazio’s new book, “Data Feminism,” examines problems of bias and power that beset modern information. Mon, 09 Mar 2020 00:00:00 -0400 Peter Dizikes | MIT News Office <p>Suppose you would like to know mortality rates for women during childbirth, by country, around the world. Where would you look? One option is the <a href=”” target=”_blank”>WomanStats</a> Project, the website of an academic research effort investigating the links between the security and activities of nation-states, and the security of the women who live in them.</p><p>The project, founded in 2001, meets a need by patching together data from around the world. Many countries are indifferent to collecting statistics about women’s lives. But even where countries try harder to gather data, there are clear challenges to arriving at useful numbers — when it comes to women’s physical security, property rights, and government participation, among many other issues. &nbsp;</p><p>For instance: In some countries, violations of women’s rights may be reported more regularly than in other places. That means a more responsive legal system may create the appearance of greater problems, when it provides relatively more support for women. The WomanStats Project notes many such complications.</p><p>Thus the WomanStats Project offers some answers — for example, Australia, Canada, and much of Western Europe have low childbirth mortality rates — while also showing what the challenges are to taking numbers at face value. This, according to MIT professor Catherine D’Ignazio, makes the site unusual, and valuable.</p><p>“The data never speak for themselves,” says D’Ignazio, referring to the general problem of finding reliable numbers about women’s lives. “There are always humans and institutions speaking for the data, and different people have their own agendas.
The data are never innocent.”</p><p>Now D’Ignazio, an assistant professor in MIT’s Department of Urban Studies and Planning, has taken a deeper look at this issue in a new book, co-authored with Lauren Klein, an associate professor of English and quantitative theory and methods at Emory University. In the book, “<a href=”” target=”_blank”>Data Feminism</a>,” published this month by the MIT Press, the authors use the lens of intersectional feminism to scrutinize how data science reflects the social structures it emerges from.</p><p>“Intersectional feminism examines unequal power,” write D’Ignazio and Klein, in the book’s introduction. “And in our contemporary world, data is power too. Because the power of data is wielded unjustly, it must be challenged and changed.”</p><p><strong>The 4 percent problem</strong></p><p>To see a clear case of power relations generating biased data, D’Ignazio and Klein note, consider research led by MIT’s own Joy Buolamwini, who as a graduate student in a class studying facial-recognition programs, observed that the software in question could not “see” her face. Buolamwini found that for the facial-recognition system in question, the software was based on a set of faces which were 78 percent male and 84 percent white; only 4 percent were female and dark-skinned, like herself.&nbsp;</p><p>Subsequent media coverage of Buolamwini’s work, D’Ignazio and Klein write, contained “a hint of shock.” But the results were probably less surprising to those who are not white males, they think.&nbsp;&nbsp;</p><p>“If the past is racist, oppressive, sexist, and biased, and that’s your training data, that is what you are tuning for,” D’Ignazio says.</p><p>Or consider another example, from tech giant Amazon, which tested an automated system that used AI to sort through promising CVs sent in by job applicants. 
One problem: Because a high percentage of company employees were men, the algorithm favored men’s names, other things being equal.&nbsp;</p><p>“They thought this would help [the] process, but of course what it does is train the AI [system] to be biased against women, because they themselves have not hired that many women,” D’Ignazio observes.</p><p>To Amazon’s credit, it did recognize the problem. Moreover, D’Ignazio notes, this kind of issue is a problem that can be addressed. “Some of the technologies can be reformed with a more participatory process, or better training data. … If we agree that’s a good goal, one path forward is to adjust your training set and include more people of color, more women.”</p><p><strong>“Who’s on the team? Who had the idea? Who’s benefiting?” </strong></p><p>Still, the question of who participates in data science is, as the authors write, “the elephant in the server room.” As of 2011, only 26 percent of all undergraduates receiving computer science degrees in the U.S. were women. That is not only a low figure, but actually a decline from past levels: In 1985, 37 percent of computer science graduates were women, the highest mark on record.</p><p>As a result of the lack of diversity in the field, D’Ignazio and Klein believe, many data projects are radically limited in their ability to see all facets of the complex social situations they purport to measure.&nbsp;</p><p>“We want to try to tune people in to these kinds of power relationships and why they matter deeply,” D’Ignazio says. “Who’s on the team? Who had the idea? Who’s benefiting from the project? Who’s potentially harmed by the project?”</p><p>In all, D’Ignazio and Klein outline seven principles of data feminism, from examining and challenging power, to rethinking binary systems and hierarchies, and embracing pluralism.
(Those statistics about gender and computer science graduates are limited, they note, by only using the “male” and “female” categories, thus excluding people who identify in different terms.)</p><p>People interested in data feminism, the authors state, should also “value multiple forms of knowledge,” including firsthand knowledge that may lead us to question seemingly official data. Also, they should always consider the context in which data are generated, and “make labor visible” when it comes to data science. This last principle, the researchers note, speaks to the problem that even when women and other excluded people contribute to data projects, they often receive less credit for their work.</p><p>For all the book’s critique of existing systems, programs, and practices, D’Ignazio and Klein are also careful to include examples of positive, successful efforts, such as the WomanStats project, which has grown and thrived over two decades.</p><p>“For people who are data people but are new to feminism, we want to provide them with a very accessible introduction, and give them concepts and tools they can use in their practice,” D’Ignazio says. “We’re not imagining that people already have feminism in their toolkit. On the other hand, we are trying to speak to folks who are very tuned in to feminism or social justice principles, and highlight for them the ways data science is both problematic, but can be marshalled in the service of justice.”</p>
Catherine D’Ignazio is the co-author of a new book, “Data Feminism,” published by MIT Press in March 2020. Image: Diana Levine and MIT Press

“Doing machine learning the right way” Professor Aleksander Madry strives to build machine-learning models that are more reliable, understandable, and robust. Sat, 07 Mar 2020 23:59:59 -0500 Rob Matheson | MIT News Office <p>The work of MIT computer scientist Aleksander Madry is fueled by one core mission: “doing machine learning the right way.”</p><p>Madry’s research centers largely on making machine learning — a type of artificial intelligence — more accurate, efficient, and robust against errors. In his classroom and beyond, he also worries about questions of ethical computing, as we approach an age where artificial intelligence will have great impact on many sectors of society.</p><p>“I want society to truly embrace machine learning,” says Madry, a recently tenured professor in the Department of Electrical Engineering and Computer Science. “To do that, we need to figure out how to train models that people can use safely, reliably, and in a way that they understand.”</p><p>Interestingly, his work with machine learning dates back only a couple of years, to shortly after he joined MIT in 2015. In that time, his research group has published several critical papers demonstrating that certain models can be easily tricked to produce inaccurate results — and showing how to make them more robust.</p><p>In the end, he aims to make each model’s decisions more interpretable by humans, so researchers can peer inside to see where things went awry. At the same time, he wants to enable nonexperts to deploy the improved models in the real world for, say, helping diagnose disease or control driverless cars.</p><p>“It’s not just about trying to crack open the machine-learning black box. 
I want to open it up, see how it works, and pack it back up, so people can use it without needing to understand what’s going on inside,” he says.</p><p><strong>For the love of algorithms</strong></p><p>Madry was born in Wroclaw, Poland, where he attended the University of Wroclaw as an undergraduate in the mid-2000s. While he harbored interest in computer science and physics, “I actually never thought I’d become a scientist,” he says.</p><p>An avid video gamer, Madry initially enrolled in the computer science program with intentions of programming his own games. But in joining friends in a few classes in theoretical computer science and, in particular, theory of algorithms, he fell in love with the material. Algorithm theory aims to find efficient optimization procedures for solving computational problems, which requires tackling difficult mathematical questions. “I realized I enjoy thinking deeply about something and trying to figure it out,” says Madry, who wound up double-majoring in physics and computer science.</p><p>When it came to delving deeper into algorithms in graduate school, he went to his first choice: MIT. Here, he worked under both Michel X. Goemans, who was a major figure in applied math and algorithm optimization, and Jonathan A. Kelner, who had just arrived at MIT as a junior faculty member working in that field. For his PhD dissertation, Madry developed algorithms that solved a number of longstanding problems in graph algorithms, earning the 2011 George M. Sprowls Doctoral Dissertation Award for the best MIT doctoral thesis in computer science.</p><p>After his PhD, Madry spent a year as a postdoc at Microsoft Research New England, before teaching for three years at the Swiss Federal Institute of Technology Lausanne — which Madry calls “the Swiss version of MIT.” But his alma mater kept calling him back: “MIT has the thrilling energy I was missing.
It’s in my DNA.”</p><p><strong>Getting adversarial</strong></p><p>Shortly after joining MIT, Madry found himself swept up in a novel science: machine learning. In particular, he focused on understanding the re-emerging paradigm of deep learning. That’s an artificial-intelligence application that uses multiple computing layers to extract high-level features from raw input — such as using pixel-level data to classify images. MIT’s campus was, at the time, buzzing with new innovations in the domain.</p><p>But that raised the question: Was machine learning all hype or solid science? “It seemed to work, but no one actually understood how and why,” Madry says.</p><p>Answering that question set his group on a long journey, running experiment after experiment on deep-learning models to understand the underlying principles. A major milestone in this journey was an influential paper they published in 2018, developing a methodology for making machine-learning models more resistant to “adversarial examples.” Adversarial examples are slight perturbations to input data that are imperceptible to humans — such as changing the color of one pixel in an image — but cause a model to make inaccurate predictions. They illuminate a major shortcoming of existing machine-learning tools.</p><p>Continuing this line of work, Madry’s group showed that the existence of these mysterious adversarial examples may contribute to how machine-learning models make decisions. In particular, models designed to differentiate images of, say, cats and dogs, make decisions based on features that do not align with how humans make classifications. Simply changing these features can make the model consistently misclassify cats as dogs, without changing anything in the image that’s really meaningful to humans.</p><p>Results indicated some models — which may be used to, say, identify abnormalities in medical images or help autonomous cars identify objects in the road —&nbsp;aren’t exactly up to snuff.
“People often think these models are superhuman, but they didn’t actually solve the classification problem we intend them to solve,” Madry says. “And their complete vulnerability to adversarial examples was a manifestation of that fact. That was an eye-opening finding.”</p><p>That’s why Madry seeks to make machine-learning models more interpretable to humans. New models he’s developed show how much individual pixels in the images the system is trained on can influence its predictions. Researchers can then tweak the models to focus on pixel clusters more closely correlated with identifiable features — such as detecting an animal’s snout, ears, and tail. In the end, that will help make the models more humanlike —&nbsp;or “superhumanlike” —&nbsp;in their decisions. To further this work, Madry and his colleagues recently founded the <a href=””>MIT Center for Deployable Machine Learning</a>, a collaborative research effort within the <a href=”” target=”_blank”>MIT Quest for Intelligence</a> that is working toward building machine-learning tools ready for real-world deployment.&nbsp;</p><p>“We want machine learning not just as a toy, but as something you can use in, say, an autonomous car, or health care. Right now, we don’t understand enough to have sufficient confidence in it for those critical applications,” Madry says.</p><p><strong>Shaping education and policy</strong></p><p>Madry views artificial intelligence and decision making (“AI+D” is one of the three <a href=”″>new academic units</a> in the Department of Electrical Engineering and Computer Science) as “the interface of computing that’s going to have the biggest impact on society.”</p><p>In that regard, he makes sure to expose his students to the human aspect of computing. In part, that means considering consequences of what they’re building. Often, he says, students will be overly ambitious in creating new technologies, but they haven’t thought through potential ramifications on individuals and society.
“Building something cool isn’t a good enough reason to build something,” Madry says. “It’s about thinking about not if we can build something, but if we should build something.”</p><p>Madry has also been engaging in conversations about laws and policies to help regulate machine learning. A point of these discussions, he says, is to better understand the costs and benefits of unleashing machine-learning technologies on society.</p><p>“Sometimes we overestimate the power of machine learning, thinking it will be our salvation. Sometimes we underestimate the cost it may have on society,” Madry says. “To do machine learning right, there’s still a lot left to figure out.”</p>
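The adversarial-example phenomenon the article describes can be sketched with a toy linear classifier. The fast gradient sign method used here is a standard construction from the literature, not code from Madry's group, and the model, input, and budget are invented for illustration: a per-coordinate perturbation of only 0.06 flips the prediction.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w . x > 0, else class 0.
w = np.array([1.0, -1.0])
x = np.array([0.6, 0.5])           # clean input; score w.x = 0.1 -> class 1

def predict(v):
    return int(w @ v > 0)

# Fast gradient sign method (FGSM): nudge every coordinate by eps in the
# direction that most decreases the class-1 score.
eps = 0.06                          # "imperceptible" per-coordinate budget
x_adv = x - eps * np.sign(w)        # becomes [0.54, 0.56]

print(predict(x), predict(x_adv))   # 1 0
print(np.max(np.abs(x_adv - x)))    # 0.06 -- no coordinate moves more than eps
```

The same mechanism scales to images: a pixel-level perturbation bounded by a tiny eps can push a deep network's score across a decision boundary, which is exactly the vulnerability adversarial training is designed to close.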
Aleksander Madry Image: Ian MacLellan

Showing robots how to do your chores By observing humans, robots learn to perform complex tasks, such as setting a table. Thu, 05 Mar 2020 23:59:59 -0500 Rob Matheson | MIT News Office <p>Training interactive robots may one day be an easy job for everyone, even those without programming expertise. Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. In the workplace, you could train robots like new employees, showing them how to perform many duties.</p><p>Making progress on that vision, MIT researchers have designed a system that lets these types of robots learn complicated tasks that would otherwise stymie them with too many confusing rules. One such task is setting a dinner table under certain conditions. &nbsp;</p><p>At its core, the researchers’ “Planning with Uncertain Specifications” (PUnS) system gives robots the humanlike planning ability to simultaneously weigh many ambiguous —&nbsp;and potentially contradictory —&nbsp;requirements to reach an end goal. In doing so, the system always chooses the most likely action to take, based on a “belief” about some probable specifications for the task it is supposed to perform.</p><p>In their work, the researchers compiled a dataset with information about how eight objects — a mug, glass, spoon, fork, knife, dinner plate, small plate, and bowl — could be placed on a table in various configurations. A robotic arm first observed randomly selected human demonstrations of setting the table with the objects. Then, the researchers tasked the arm with automatically setting a table in a specific configuration, in real-world experiments and in simulation, based on what it had seen.</p><p>To succeed, the robot had to weigh many possible placement orderings, even when items were purposely removed, stacked, or hidden. Normally, all of that would confuse robots too much. 
But the researchers’ robot made no mistakes over several real-world experiments, and only a handful of mistakes over tens of thousands of simulated test runs. &nbsp;</p><p>“The vision is to put programming in the hands of domain experts, who can program robots through intuitive ways, rather than describing orders to an engineer to add to their code,” says first author Ankit Shah, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and the Interactive Robotics Group, who emphasizes that their work is just one step in fulfilling that vision. “That way, robots won’t have to perform preprogrammed tasks anymore. Factory workers can teach a robot to do multiple complex assembly tasks. Domestic robots can learn how to stack cabinets, load the dishwasher, or set the table from people at home.”</p><p>Joining Shah on the paper are AeroAstro and Interactive Robotics Group graduate student Shen Li and Interactive Robotics Group leader Julie Shah, an associate professor in AeroAstro and the Computer Science and Artificial Intelligence Laboratory.</p><p><strong>Bots hedging bets</strong></p><p>Robots are fine planners in tasks with clear “specifications,” which help describe the task the robot needs to fulfill, considering its actions, environment, and end goal. Learning to set a table by observing demonstrations is full of uncertain specifications. Items must be placed in certain spots, depending on the menu and where guests are seated, and in certain orders, depending on an item’s immediate availability or social conventions. Present approaches to planning are not capable of dealing with such uncertain specifications.</p><p>A popular approach to planning is “reinforcement learning,” a trial-and-error machine-learning technique that rewards and penalizes a robot for its actions as it works to complete a task. But for tasks with uncertain specifications, it’s difficult to define clear rewards and penalties.
In short, robots never fully learn right from wrong.</p><p>The researchers’ system, called PUnS (for Planning with Uncertain Specifications), enables a robot to hold a “belief” over a range of possible specifications. The belief itself can then be used to dish out rewards and penalties. “The robot is essentially hedging its bets in terms of what’s intended in a task, and takes actions that satisfy its belief, instead of us giving it a clear specification,” Ankit Shah says.</p><p>The system is built on “linear temporal logic” (LTL), an expressive language that enables robotic reasoning about current and future outcomes. The researchers defined templates in LTL that model various time-based conditions, such as what must happen now, must eventually happen, and must happen until something else occurs. The robot’s observations of 30 human demonstrations for setting the table yielded a probability distribution over 25 different LTL formulas. Each formula encoded a slightly different preference — or specification — for setting the table. That probability distribution becomes its belief.</p><p>“Each formula encodes something different, but when the robot considers various combinations of all the templates, and tries to satisfy everything together, it ends up doing the right thing eventually,” Ankit Shah says.</p><p><strong>Following criteria</strong></p><p>The researchers also developed several criteria that guide the robot toward satisfying the entire belief over those candidate formulas. One, for instance, satisfies the most likely formula, which discards everything else apart from the template with the highest probability. Others satisfy the largest number of unique formulas, without considering their overall probability, or they satisfy several formulas that represent highest total probability. 
Another simply minimizes error, so the system ignores formulas with high probability of failure.</p><p>Designers can choose any one of the four criteria to preset before training and testing. Each has its own tradeoff between flexibility and risk aversion. The choice of criteria depends entirely on the task. In safety-critical situations, for instance, a designer may choose to limit the possibility of failure. But where consequences of failure are not as severe, designers can choose to give robots greater flexibility to try different approaches.</p><p>With the criteria in place, the researchers developed an algorithm to convert the robot’s belief — the probability distribution pointing to the desired formula — into an equivalent reinforcement learning problem. This model will ping the robot with a reward or penalty for an action it takes, based on the specification it’s decided to follow.</p><p>In simulations asking the robot to set the table in different configurations, it only made six mistakes out of 20,000 tries. In real-world demonstrations, it showed behavior similar to how a human would perform the task. If an item wasn’t initially visible, for instance, the robot would finish setting the rest of the table without the item. Then, when the fork was revealed, it would set the fork in the proper place. “That’s where flexibility is very important,” Ankit Shah says. “Otherwise it would get stuck when it expects to place a fork and not finish the rest of the table setup.”</p><p>Next, the researchers hope to modify the system to help robots change their behavior based on verbal instructions, corrections, or a user’s assessment of the robot’s performance. “Say a person demonstrates to a robot how to set a table at only one spot. The person may say, ‘do the same thing for all other spots,’ or, ‘place the knife before the fork here instead,’” Ankit Shah says.
“We want to develop methods for the system to naturally adapt to handle those verbal commands, without needing additional demonstrations.”&nbsp;&nbsp;</p>
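The "hedging its bets" idea can be sketched in a few lines: hold a probability distribution over candidate specifications and score each candidate plan by its expected satisfaction under that belief. The two toy ordering rules below stand in for the paper's 25 LTL formulas, and their names and weights are invented for illustration; this corresponds loosely to the criterion that satisfies the highest total probability.

```python
# Toy belief over two candidate specifications for table setting.
# Each spec is a predicate on a plan (an ordering of items to place).
specs = {
    "fork_before_knife": lambda plan: plan.index("fork") < plan.index("knife"),
    "knife_before_fork": lambda plan: plan.index("knife") < plan.index("fork"),
}
belief = {"fork_before_knife": 0.7, "knife_before_fork": 0.3}  # invented weights

def expected_satisfaction(plan):
    # Probability-weighted sum over specifications the plan satisfies.
    return sum(p * specs[name](plan) for name, p in belief.items())

plans = [("fork", "knife", "plate"), ("knife", "fork", "plate")]
best = max(plans, key=expected_satisfaction)
print(best)                         # ('fork', 'knife', 'plate')
print(expected_satisfaction(best))  # 0.7
```

Because the plan is chosen against the whole belief rather than a single hard rule, shifting the weights — say, after new demonstrations — changes the preferred ordering without rewriting any specification.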
Roboticists are developing automated robots that can learn new tasks solely by observing humans. At home, you might someday show a domestic robot how to do routine chores. Image: Christine Daniloff, MIT

A new model of vision Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly. Wed, 04 Mar 2020 14:00:00 -0500 Anne Trafton | MIT News Office <p>When we open our eyes, we immediately see our surroundings in great detail. How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision.</p><p>Scientists who study the brain have tried to replicate this phenomenon using computer models of vision, but so far, leading models only perform much simpler tasks such as picking out an object or a face against a cluttered background. Now, a team led by MIT cognitive scientists has produced a computer model that captures the human visual system’s ability to quickly generate a detailed scene description from an image, and offers some insight into how the brain achieves this.</p><p>“What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world,” says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).</p><p>The new model posits that when the brain receives visual input, it quickly performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face or other object. 
This type of model, known as efficient inverse graphics (EIG), also correlates well with electrical recordings from face-selective regions in the brains of nonhuman primates, suggesting that the primate visual system may be organized in much the same way as the computer model, the researchers say.</p><p>Ilker Yildirim, a former MIT postdoc who is now an assistant professor of psychology at Yale University, is the lead author of the paper, which appears today in <em>Science Advances</em>. Tenenbaum and Winrich Freiwald, a professor of neurosciences and behavior at Rockefeller University, are the senior authors of the study. Mario Belledonne, a graduate student at Yale, is also an author.</p><p><strong>Inverse graphics</strong></p><p>Decades of research on the brain’s visual system has studied, in great detail, how light input onto the retina is transformed into cohesive scenes. This understanding has helped artificial intelligence researchers develop computer models that can replicate aspects of this system, such as recognizing faces or other objects.</p><p>“Vision is the functional aspect of the brain that we understand the best, in humans and other animals,” Tenenbaum says. “And computer vision is one of the most successful areas of AI at this point. We take for granted that machines can now look at pictures and recognize faces very well, and detect other kinds of objects.”</p><p>However, even these sophisticated artificial intelligence systems don’t come close to what the human visual system can do, Yildirim says.</p><p>“Our brains don’t just detect that there’s an object over there, or recognize and put a label on something,” he says. “We see all of the shapes, the geometry, the surfaces, the textures. We see a very rich world.”</p><p>More than a century ago, the physician, physicist, and philosopher Hermann von Helmholtz theorized that the brain creates these rich representations by reversing the process of image formation. 
He hypothesized that the visual system includes an image generator that would be used, for example, to produce the faces that we see during dreams. Running this generator in reverse would allow the brain to work backward from the image and infer what kind of face or other object would produce that image, the researchers say.</p><p>However, the question remained: How could the brain perform this process, known as inverse graphics, so quickly? Computer scientists have tried to create algorithms that could perform this feat, but the best previous systems require many cycles of iterative processing, taking much longer than the 100 to 200 milliseconds the brain requires to create a detailed visual representation of what you’re seeing. Neuroscientists believe perception in the brain can proceed so quickly because it is implemented in a mostly feedforward pass through several hierarchically organized layers of neural processing.</p><p>The MIT-led team set out to build a special kind of deep neural network model to show how a neural hierarchy can quickly infer the underlying features of a scene — in this case, a specific face. In contrast to the standard deep neural networks used in computer vision, which are trained from labeled data indicating the class of an object in the image, the researchers’ network is trained from a model that reflects the brain’s internal representations of what scenes with faces can look like.</p><p>Their model thus learns to reverse the steps performed by a computer graphics program for generating faces. These graphics programs begin with a three-dimensional representation of an individual face and then convert it into a two-dimensional image, as seen from a particular viewpoint. These images can be placed on an arbitrary background image. 
The researchers theorize that the brain’s visual system may do something similar when you dream or conjure a mental image of someone’s face.</p><p>The researchers trained their deep neural network to perform these steps in reverse — that is, it begins with the 2D image and then adds features such as texture, curvature, and lighting, to create what the researchers call a “2.5D” representation. These 2.5D images specify the shape and color of the face from a particular viewpoint. Those are then converted into 3D representations, which don’t depend on the viewpoint.</p><p>“The model gives a systems-level account of the processing of faces in the brain, allowing it to see an image and ultimately arrive at a 3D object, which includes representations of shape and texture, through this important intermediate stage of a 2.5D image,” Yildirim says.</p><p><strong>Model performance</strong></p><p>The researchers found that their model is consistent with data obtained by studying certain regions in the brains of macaque monkeys. In a study published in 2010, Freiwald and Doris Tsao of Caltech recorded the activity of neurons in those regions and analyzed how they responded to 25 different faces, seen from seven different viewpoints. That study revealed three stages of higher-level face processing, which the MIT team now hypothesizes correspond to three stages of their inverse graphics model: roughly, a 2.5D viewpoint-dependent stage; a stage that bridges from 2.5 to 3D; and a 3D, viewpoint-invariant stage of face representation.</p><p>“What we show is that both the quantitative and qualitative response properties of those three levels of the brain seem to fit remarkably well with the top three levels of the network that we’ve built,” Tenenbaum says.</p><p>The researchers also compared the model’s performance to that of humans in a task that involves recognizing faces from different viewpoints. 
This task becomes harder when researchers alter the faces by removing the face’s texture while preserving its shape, or distorting the shape while preserving relative texture. The new model’s performance was much more similar to that of humans than to that of computer models used in state-of-the-art face-recognition software, providing additional evidence that this model may be closer to mimicking what happens in the human visual system.</p><p>“This work is exciting because it introduces interpretable stages of intermediate representation into a feedforward neural network model of face recognition,” says Nikolaus Kriegeskorte, a professor of psychology and neuroscience at Columbia University, who was not involved in the research. “Their approach merges the classical idea that vision inverts a model of how the image was generated, with modern deep feedforward networks. It’s very interesting that this model better explains neural representations and behavioral responses.”</p><p>The researchers now plan to continue testing the modeling approach on additional images, including objects that aren’t faces, to investigate whether inverse graphics might also explain how the brain perceives other kinds of scenes. In addition, they believe that adapting this approach to computer vision could lead to better-performing AI systems.</p><p>“If we can show evidence that these models might correspond to how the brain works, this work could lead computer vision researchers to take more seriously and invest more engineering resources in this inverse graphics approach to perception,” Tenenbaum says. “The brain is still the gold standard for any kind of machine that sees the world richly and quickly.”</p><p>The research was funded by the Center for Brains, Minds, and Machines at MIT, the National Science Foundation, the National Eye Institute, the Office of Naval Research, the New York Stem Cell Foundation, the Toyota Research Institute, and Mitsubishi Electric.</p>
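The progression from a viewpoint-dependent stage to a viewpoint-invariant one can be illustrated numerically. This is not the EIG network itself: the "renderer" below is just a 2D rotation standing in for a graphics program, and the latent "face" code and viewpoints are invented. The point is that the final inference stage undoes the viewpoint transform, so different views of the same face map to one invariant representation.

```python
import math

def rotate(v, theta):
    """Viewpoint transform: rotate a 2D latent by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def render(latent, viewpoint):
    # Toy "graphics program": a viewpoint-dependent observation,
    # playing the role of the 2.5D stage.
    return rotate(latent, viewpoint)

def infer_3d(observation, viewpoint):
    # Final inference stage: undo the viewpoint transform to reach a
    # viewpoint-invariant representation, playing the role of the 3D stage.
    return rotate(observation, -viewpoint)

face = (0.8, 0.3)                   # invented latent "face" code
views = [0.4, 1.2]                  # two different viewpoints
recovered = [infer_3d(render(face, t), t) for t in views]

# Both viewpoints yield the same invariant representation.
for r in recovered:
    assert all(abs(a - b) < 1e-9 for a, b in zip(r, face))
```

In the actual model the inverse mapping is not known in closed form; a deep network is trained to approximate it, with the 2.5D and 3D stages emerging as intermediate and final layers.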
MIT cognitive scientists have developed a computer model of face recognition that performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face. Image: courtesy of the researchers

Demystifying the world of deep networks Researchers discover that no magic is required to explain why deep networks generalize despite going against statistical intuition. Fri, 28 Feb 2020 14:40:01 -0500 Kris Brewer | Center for Brains, Minds and Machines <p>Introductory statistics courses teach us that, when fitting a model to some data, we should have more data than free parameters to avoid the danger of overfitting — fitting noisy data too closely, and thereby failing to fit new data. It is surprising, then, that in modern deep learning the practice is to have orders of magnitude more parameters than data. Despite this, deep networks show good predictive performance, and in fact do better the more parameters they have. Why would that be?</p> <p>It has been known for some time that good performance in machine learning comes from controlling the complexity of networks, which is not just a simple function of the number of free parameters. The complexity of a classifier, such as a neural network, depends on measuring the “size” of the space of functions that this network represents, with multiple technical measures previously suggested: Vapnik–Chervonenkis dimension, covering numbers, or Rademacher complexity, to name a few. Complexity, as measured by these notions, can be controlled during the learning process by imposing a constraint on the norm of the parameters — in short, on how “big” they can get. The surprising fact is that no such explicit constraint seems to be needed in training deep networks. Does deep learning lie outside of the classical learning theory? 
Do we need to rethink the foundations?</p> <p>In a new <em>Nature Communications</em> paper, “Complexity Control by Gradient Descent in Deep Networks,” a team from the Center for Brains, Minds, and Machines led by Director Tomaso Poggio, the Eugene McDermott Professor in the MIT Department of Brain and Cognitive Sciences, has shed some light on this puzzle by addressing the most practical and successful applications of modern deep learning: classification problems.</p> <p>“For classification problems, we observe that in fact the parameters of the model do not seem to converge, but rather grow in size indefinitely during gradient descent. However, in classification problems only the normalized parameters matter — i.e., the direction they define, not their size,” says co-author and MIT PhD candidate Qianli Liao. “The not-so-obvious thing we showed is that the commonly used gradient descent on the unnormalized parameters induces the desired complexity control on the normalized ones.”</p> <p>“We have known for some time in the case of regression for shallow linear networks, such as kernel machines, that iterations of gradient descent provide an implicit, vanishing regularization effect,” Poggio says. “In fact, in this simple case we provably know that we get the best-behaving maximum-margin, minimum-norm solution. The question we asked ourselves, then, was: Can something similar happen for deep networks?”</p> <p>The researchers found that it does. As co-author and MIT postdoc Andrzej Banburski explains, “Understanding convergence in deep networks shows that there are clear directions for improving our algorithms. In fact, we have already seen hints that controlling the rate at which these unnormalized parameters diverge allows us to find better performing solutions and find them faster.”</p> <p>What does this mean for machine learning? There is no magic behind deep networks. The same theory behind all linear models is at play here as well.
This work suggests ways to improve deep networks, making them more accurate and faster to train.</p>
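The central observation — unnormalized parameters grow without bound while the direction they define settles down — can be reproduced in the simplest setting: a linear classifier trained with gradient descent on logistic loss over separable data. This is a sketch of the phenomenon, not the paper's deep-network analysis, and the dataset, learning rate, and step counts are invented for illustration.

```python
import math

# Linearly separable toy data: (features, label) pairs.
data = [((2.0, 1.0), 1), ((1.0, 2.0), 1), ((-1.5, -1.0), -1), ((-1.0, -2.0), -1)]

def grad(w):
    # Gradient of the logistic loss: sum over data of log(1 + exp(-y * w.x)).
    g = [0.0, 0.0]
    for (x, y) in data:
        margin = y * (w[0] * x[0] + w[1] * x[1])
        coef = -y / (1.0 + math.exp(margin))
        g[0] += coef * x[0]
        g[1] += coef * x[1]
    return g

def norm(w):
    return math.hypot(w[0], w[1])

w = [0.1, 0.0]
snapshots = []
for step in range(20000):
    g = grad(w)
    w = [w[0] - 0.1 * g[0], w[1] - 0.1 * g[1]]
    if step in (1999, 19999):
        n = norm(w)
        snapshots.append((n, (w[0] / n, w[1] / n)))

(n1, d1), (n2, d2) = snapshots
print(n2 > n1)                               # True: the norm keeps growing
print(math.hypot(d2[0] - d1[0], d2[1] - d1[1]))  # small: direction barely moves
```

On separable data the loss can always be reduced by scaling the weights up, so the norm diverges; yet the classification depends only on the direction of the weights, which converges — in this shallow linear case, toward the maximum-margin separator.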
MIT researchers (left to right) Qianli Liao, Tomaso Poggio, and Andrzej Banburski stand with their equations. Image: Kris Brewer
