MIT News – Artificial intelligence
<p><a href="https://arxiv.org/abs/1910.01287">One paper</a> builds on last year’s <a href="http://news.mit.edu/2019/new-robot-hand-gripper-soft-and-strong-0315">research</a> from MIT and Harvard University, in which a team developed a soft and strong robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus flytrap, picking up items as much as 100 times its weight. </p> <p>To bring that versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the gripper not only pick up objects as delicate as potato chips, but also classify them, letting the robot better understand what it’s picking up while still exhibiting that light touch. </p> <p>When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of its grip.</p> <p>“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.” </p> <p>In <a href="https://arxiv.org/pdf/1910.01287.pdf">a second paper</a>, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of the positions and movements of the body). </p> <p>The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy. </p> <p>“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.” </p> <p><strong>Magic ball senses</strong></p> <p>The magic ball gripper is made from a soft origami structure encased in a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms to the object’s shape. </p> <p>While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the finer points of delicacy and understanding were still out of reach until the team added the sensors. </p> <p>When the sensors experience force or strain, their internal pressure changes, and the team can measure this change in pressure to sense when and how firmly the gripper is contacting an object. </p> <p>In addition to the latex sensors, the team developed an algorithm that uses this feedback to give the gripper a human-like duality of being both strong and precise; 80 percent of the tested objects were successfully grasped without damage.</p>
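<p>As a rough illustration of how a few pressure channels could support that kind of classification, the sketch below maps per-grasp pressure features to object labels with a nearest-neighbor classifier. The sensor count, the features, the simulated data, and the classifier choice are all assumptions made for the example, not the method reported in the paper.</p>
<pre><code># Illustrative only: classify grasped objects from balloon-pressure readings.
# The sensor count, feature choice, and classifier are assumptions for this
# sketch, not the method described in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

N_SENSORS = 4        # assumed number of latex "bladder" sensors on the gripper
N_OBJECTS = 10       # the paper reports classifying 10 objects
GRASPS_PER_OBJECT = 30

def fake_grasp(object_id):
    # Summarize each grasp by the mean and peak pressure change (in kPa) seen
    # by every sensor while the origami structure closes; stiffer or larger
    # objects push back harder in this toy model.
    base = 2.0 + 0.5 * object_id
    mean_dp = base + rng.normal(0, 0.2, N_SENSORS)
    peak_dp = mean_dp * (1.5 + rng.normal(0, 0.1, N_SENSORS))
    return np.concatenate([mean_dp, peak_dp])

X = np.array([fake_grasp(o) for o in range(N_OBJECTS) for _ in range(GRASPS_PER_OBJECT)])
y = np.repeat(np.arange(N_OBJECTS), GRASPS_PER_OBJECT)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
</code></pre>
<p>The point of the sketch is only that a handful of pressure channels can carry enough signal to separate objects by how they press back on the gripper; the published accuracy figures come from the team’s own method and data.</p>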
<p>The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies. </p> <p>Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage of the new sensor technology. Eventually, they imagine using the sensors to create a fluidic sensing skin that is both scalable and sensitive. </p> <p>Hughes co-wrote the new paper with MIT Professor Daniela Rus, and they will present it virtually at the 2020 International Conference on Robotics and Automation. </p> <p><strong>GelFlex</strong></p> <p>In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way they require rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.</p> <p>To create GelFlex, the team used silicone to fabricate the soft, transparent finger, placing one camera near the fingertip and another in the middle of the finger. They then painted reflective ink on the front and side surfaces of the finger and added LED lights on the back, allowing the internal fisheye cameras to observe the state of the front and side surfaces. </p> <p>The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed (a simplified sketch of the first network appears at the end of this article). The gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum. </p> <p>During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. Out of 80 trials, only three were classified incorrectly. </p> <p>In the future, the team hopes to improve the proprioception and tactile sensing algorithms and to use vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors but should be attainable with embedded cameras.</p> <p>Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.</p>
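<p>For readers who want a concrete picture of the proprioception half described above, here is a minimal sketch, in PyTorch, of a network in the spirit of the bending-angle predictor: a small convolutional encoder that regresses a single angle from an internal-camera image. The image size, the architecture, and the training step are invented for illustration and are not the GelFlex implementation.</p>
<pre><code># Illustrative only: regress a finger bending angle from an internal-camera
# image. Architecture, image size, and training details are assumptions.
import torch
import torch.nn as nn

class BendingAngleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # a single scalar: bending angle in degrees

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = BendingAngleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One fake training step on random stand-ins for fisheye images and angle labels.
images = torch.rand(8, 3, 96, 96)
angles = torch.rand(8, 1) * 90.0
loss = loss_fn(model(images), angles)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("toy regression loss:", loss.item())
</code></pre>
<p>A second network with a classification or shape-regression head, trained on the same camera streams, would play the role of the object shape and size estimator.</p>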
<p>Machine learning classifiers detected the gestures using the wearable sensors. Unsupervised classifiers processed the muscle and motion data and clustered it in real time to learn how to separate gestures from other motions. A neural network also predicted wrist flexion or extension from forearm muscle signals. </p> <p>The system essentially calibrates itself to each person’s signals while they’re making gestures that control the robot, making it faster and easier for casual users to start interacting with robots.</p> <p>In the future, the team hopes to expand the tests to include more subjects. And while the movements for Conduct-A-Bot cover common gestures for robot motion, the researchers want to extend the vocabulary to include more continuous or user-defined gestures. Eventually, the hope is to have the robots learn from these interactions to better understand the tasks and provide more predictive assistance or increase their autonomy. </p> <p>“This system moves one step closer to letting us work seamlessly with robots so they can become more effective and intelligent tools for everyday tasks,” says DelPreto. “As such collaborations continue to become more accessible and pervasive, the possibilities for synergistic benefit continue to deepen.” </p> <p>DelPreto and Rus presented the paper virtually earlier this month at the ACM/IEEE International Conference on Human-Robot Interaction.</p>
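<p>To give a concrete sense of the kind of processing described above, the toy sketch below clusters streaming windows of wearable-sensor features on the fly, roughly in the spirit of separating gesture-like windows from resting motion. The features, window statistics, and two-cluster setup are assumptions made for the example rather than the Conduct-A-Bot pipeline, which also pairs its clustering with a neural network that reads wrist flexion and extension from muscle signals.</p>
<pre><code># Illustrative only: stream windows of wearable-sensor features and cluster
# them incrementally. Features and cluster count are invented for the sketch.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(1)
clusterer = MiniBatchKMeans(n_clusters=2, random_state=0)

def fake_window(is_gesture):
    # Each window is summarized by (mean EMG envelope, EMG variance, gyro energy).
    if is_gesture:
        return np.array([0.8, 0.3, 2.0]) + rng.normal(0, 0.1, 3)
    return np.array([0.1, 0.05, 0.2]) + rng.normal(0, 0.05, 3)

# Simulate a live stream: update the clusters one mini-batch at a time,
# with roughly 30 percent of windows containing a gesture.
for _ in range(50):
    batch = np.array([fake_window(rng.random() > 0.7) for _ in range(16)])
    clusterer.partial_fit(batch)

# New windows are assigned to whichever cluster they fall closest to.
test = np.array([fake_window(True), fake_window(False)])
print("cluster labels for a gesture window and a rest window:", clusterer.predict(test))
</code></pre>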
<br />
In total, the researchers found, one OFA can comprise more than 10 quintillion (that’s a 1 followed by 19 zeroes) architectural settings, likely covering any platform that would ever need one. Training the OFA once and then searching it for sub-networks is far more efficient than spending hours training a separate neural network for each platform. And OFA does not compromise accuracy or inference efficiency: it provides state-of-the-art ImageNet accuracy on mobile devices, and compared with industry-leading CNN models, the researchers say, OFA delivers a 1.5-2.6 times speedup with superior accuracy. <br />
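<p>The “train once, specialize many times” idea behind that efficiency can be sketched in a few lines: sample sub-network configurations from the already-trained OFA, estimate each one’s accuracy and latency, and keep the best candidate that fits a given device’s latency budget. The configuration space, cost model, and accuracy model below are toy stand-ins; the actual system searches a trained super-network using a learned accuracy predictor and per-device latency measurements.</p>
<pre><code># Illustrative only: a toy "train once, specialize many times" search in the
# spirit of OFA. The search space and the latency/accuracy formulas are invented.
import random

random.seed(0)

KERNEL_SIZES = [3, 5, 7]
DEPTHS = [2, 3, 4]
WIDTH_MULTS = [0.75, 1.0, 1.25]
N_UNITS = 5  # assumed number of network stages

def sample_subnet():
    # Pick an independent (kernel, depth, width) setting for each stage.
    return [{"kernel": random.choice(KERNEL_SIZES),
             "depth": random.choice(DEPTHS),
             "width": random.choice(WIDTH_MULTS)} for _ in range(N_UNITS)]

def estimated_latency_ms(cfg):
    # Toy cost model: bigger kernels, deeper stages, and wider layers cost more.
    return sum(0.2 * u["kernel"] * u["depth"] * u["width"] for u in cfg)

def estimated_accuracy(cfg):
    # Toy stand-in for a learned accuracy predictor.
    return 60.0 + 0.5 * sum(u["depth"] + u["width"] * u["kernel"] for u in cfg)

def specialize(latency_budget_ms, n_trials=2000):
    # Random search: keep the most accurate sampled sub-network within budget.
    best_acc, best_cfg = None, None
    for _ in range(n_trials):
        cfg = sample_subnet()
        if estimated_latency_ms(cfg) > latency_budget_ms:
            continue
        acc = estimated_accuracy(cfg)
        if best_acc is None or acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_acc, best_cfg

# "Deploy" to two hypothetical devices with different latency budgets,
# with no retraining in between.
for budget in (10.0, 16.0):
    acc, cfg = specialize(budget)
    if cfg is None:
        print(budget, "ms budget: no sampled sub-network fit")
    else:
        print(budget, "ms budget: toy accuracy", round(acc, 1))
</code></pre>
<p>The toy space above has only about 14 million settings (27 choices per stage across five stages); the quintillions quoted for OFA come from letting far more per-layer choices multiply together across the full network.</p>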
<br />
“That’s a breakthrough technology,” says Song Han, an MIT assistant professor of electrical engineering and computer science. “If we want to run powerful AI on consumer devices, we have to figure out how to shrink AI down to size.”</p> <p>“The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices,” says Chuang Gan, a researcher at the MIT-IBM Watson AI Lab and co-author of the paper.</p> <p>“If rapid progress in AI is to continue, we need to reduce its environmental impact,” says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. “The upside of developing methods to make AI models smaller and more efficient is that the models may also perform better.”</p>
<br />
Daniela Rus is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing. Her research in robotics, artificial intelligence, and data science focuses primarily on developing the science and engineering of autonomy, with the long-term objective of enabling a future where machines are integrated into daily life to support both cognitive and physical tasks. The applications of her work are broad and include transportation, manufacturing, medicine, and urban planning. <br />
<br />
More than a dozen MIT faculty and alumni have served on PCAST during past presidential administrations. These include former MIT president Charles Vest; Institute Professors Phillip Sharp and John Deutch; Ernest Moniz, professor of physics and former U.S. Secretary of Energy; and Eric Lander, director of the Broad Institute of MIT and Harvard and professor of biology, who co-chaired PCAST during the Obama administration. Previous councils have offered advice on topics ranging from <a href="https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_big_data_and_privacy_-_may_2014.pdf" target="_blank">data privacy</a> and <a href="https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_2017_nni_review_final.pdf" target="_blank">nanotechnology</a> to <a href="https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/PCAST_worforce_edIT_Oct-2014.pdf" target="_blank">job training</a> and <a href="https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/pcast-engage-to-excel-final_2-25-12.pdf" target="_blank">STEM education</a>.</p>
<p>With DCU serving as a pilot customer, the founders got a “crash course” in the credit union industry, Kashyap says. From there they began a calculated expansion to ensure they didn’t grow faster than Posh’s revenue allowed, freeing them from having to raise venture capital.</p><p>The disciplined growth strategy at times forced Posh to get creative. Last year, as the founders were looking to build out new features and grow their team, they secured about $1.5 million in prepayments from eight credit unions in exchange for discounts on their service along with a peer-driven profit-sharing incentive. In total, the company has raised $2.5 million using that strategy.</p><p>Now on more secure financial footing, the founders are poised to accelerate Posh’s growth.</p><p><strong>Pushing the boundaries</strong></p><p>Even referring to today’s automated messaging platforms as chatbots seems generous. Most of the ones on the market today are only designed to understand what a user is asking for, something known as intent recognition.</p><p>The result is that many of the virtual agents in our lives, from the robotic telecom operator to Amazon’s Alexa to the remote control, take directions but struggle to hold a conversation. Posh’s chatbots go beyond intent recognition, using what Kashyap calls context understanding to figure out what users are saying based on the history of the conversation. The founders have a patent pending for the approach; a simplified sketch of the general idea appears at the end of this article.</p><p>“[Context understanding] allows us to more intelligently understand user inputs and handle things like changes in topics without having the bots break,” Kashyap says. “One of our biggest pet peeves was, in order to have a successful interaction with a bot, you as a user have to be very unnatural sometimes to convey what you want to convey or the bot won’t understand you.”</p><p>Kashyap says context understanding is a lot easier to accomplish when designing bots for specific industries. That’s why Posh’s founders decided to start by focusing on credit unions.</p><p>“The platforms on the market today are almost spreading themselves too thin to make a deep impact in a particular vertical,” Kashyap says. “If you have banks and telcos and health care companies all using the same [chatbot] service, it’s as if they’re all sharing the same customer service rep. It’s difficult to have one person trained across all of these domains meaningfully.”</p><p>To onboard a new credit union, Posh uses the customer’s conversational data to train its deep learning model.</p><p>“The bots continue to train even after they go live and have actual conversations,” Kashyap says. “We’re always improving it; I don’t think we’ll ever deploy a bot and say it’s done.”</p><p>Customers can use Posh’s bots for online chats, voice calls, SMS messaging, and through third-party channels like Slack, WhatsApp, and Amazon Echo. Posh also offers an analytics platform to help customers analyze what users are calling about.</p><p>For now, Kashyap says he’s focused on quadrupling the number of credit unions using Posh over the next year. Then again, the founders have never let short-term business goals cloud their larger vision for the company.</p><p>“Our perspective has always been that [the robot assistant] Jarvis from ‘Iron Man’ and the AI from the movie ‘Her’ are going to be reality sometime soon,” Kashyap says. “Someone has to pioneer the ability for bots to have contextual awareness and memory persistence.
I think there’s a lot more that needs to go into bots overall, but we felt by pushing the boundaries a little bit, we’d succeed where other bots would fail, and ultimately people would like to use our bots more than others.”</p>
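<p>To make the contrast with plain intent recognition concrete, the toy sketch below keeps a running dialogue state so that an elliptical follow-up (“And what about savings?”) still resolves to a complete request. The keyword matching, slot names, and fallback rule are invented for the example; Posh’s patent-pending approach is not shown here.</p>
<pre><code># Illustrative only: a toy dialogue state that carries context between turns so
# an elliptical follow-up still resolves to a full request. Not Posh's system.
INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much"],
    "branch_hours": ["hours", "open"],
}
ACCOUNT_KEYWORDS = ["checking", "savings"]

def parse_turn(text, state):
    # Keyword spotting stands in for a real intent model here.
    text = text.lower()
    intent = next((name for name, words in INTENT_KEYWORDS.items()
                   if any(w in text for w in words)), None)
    account = next((a for a in ACCOUNT_KEYWORDS if a in text), None)

    # The "context" step: if the current turn omits the intent or the account,
    # fall back to whatever earlier turns established.
    intent = intent or state.get("intent")
    account = account or state.get("account")
    state.update({"intent": intent, "account": account})
    return intent, account

state = {}
print(parse_turn("What is the balance on my checking account?", state))
print(parse_turn("And what about savings?", state))  # no intent keyword; context fills it in
</code></pre>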