http://tinyurl.com/crasar-NPRgreece
Exciting things continue to happen with EMILY- there’s an improved EMILY, a team of computer science, aerospace, and industrial engineering students is working on smartEMILY, and 37 undergraduates in senior capstone design are working on Computing for Disasters topics! Tony Mulligan, CEO of Hydronalix, creator of EMILY, and Roboticists Without Borders member, is heading back to Greece this weekend to check in with the teams and we look forward to his updates.
Everything is going great– except that 410 refugees have died so far this year and the resort-based tourism economy of Lesvos has been wrecked. Our thoughts and prayers go out to the refugees, the generous and kind citizens of Lesvos, and to the NGOs who continue to do the best they can.
EMILY has been improved. Notice that her video and thermal cameras are now mounted flush so that if a large number of refugees need to hang on to her, they won’t grab and break the cameras.
The Hellenic Coast Guard loves their EMILY so much, she’s on their Wikipedia page! Check out https://en.wikipedia.org/wiki/Hellenic_Coast_Guard
Back here in Texas, we are continuing the theme of participatory research, engaging graduate and undergraduate students in generating new concepts for lifeguard assistant robots:
smartEMILY. The students in my CSCE 635 Introduction to AI Robotics class are working on making EMILY easier to use. As I wrote in my 1/12/2016 blog, “The refugee crossings present a new scenario- how to handle a large number of people in the water. Some may be in different levels of distress, elderly or children, or unconscious. One solution is to use EMILY to go to the people who are still able to grab on, while the lifeguards swim to aid the people who need special professional attention. Chief John Sims from Rural/Metro Fire Department, Pima, (our 4th team member) is anticipating situations where rescuers can concentrate on saving children and unconscious victims while sending EMILY to the conscious and responsive people.” We’re calling this idea “smartEMILY” and the students from computer science, aerospace engineering, and industrial engineering are designing the artificial intelligence needed for robust operation. I can’t wait to test it on EMILY in April.
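To make the triage idea concrete, here is a purely hypothetical sketch (my illustration, not the students' actual design): responsive people are routed to EMILY while lifeguards are reserved for children and unconscious victims.

```python
# Hypothetical sketch of the smartEMILY allocation idea; the victim
# attributes and the rule below are illustrative, not the students' algorithm.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Victim:
    id: int
    responsive: bool            # conscious and able to grab a flotation device
    child_or_unconscious: bool  # needs a lifeguard's professional attention

def allocate(victims: List[Victim]) -> Tuple[List[Victim], List[Victim]]:
    """Route responsive adults to EMILY; reserve lifeguards for the rest."""
    to_emily = [v for v in victims if v.responsive and not v.child_or_unconscious]
    to_lifeguards = [v for v in victims if v not in to_emily]
    return to_emily, to_lifeguards

emily, lifeguards = allocate([
    Victim(1, responsive=True, child_or_unconscious=False),
    Victim(2, responsive=False, child_or_unconscious=True),
])
print([v.id for v in emily], [v.id for v in lifeguards])  # [1] [2]
```

The real system would of course have to infer those attributes from sensors and the operator, which is exactly the hard part the students are tackling.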
Two of the projects by undergraduate students in our CSCE 482 Senior Capstone Design class on “computing for disasters” are also related to EMILY, and two others are on other aspects of humanitarian work.
One project was inspired by our meeting with Dr. Zoi Livaditou https://m.facebook.com/zoi.livaditou who is working with the Hellenic Coast Guard. Dr. Livaditou, a medical doctor, has a cassette tape of directions to play over a megaphone to the refugees in their language—yes, a cassette tape. She was so excited at the idea of using EMILY’s two-way radio to play her taped phrases. Three groups of students (EMILYlingo, Fast Phrase, and Team Dragon) are working on a smartphone app that lets her have different speakers record phrases in different languages and then easily call them up. It should be faster to find the right phrase, easier to add phrases, and far more convenient.
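As a rough illustration only (not the EMILYlingo, Fast Phrase, or Team Dragon designs), the heart of such an app is a lookup of recorded clips keyed by language and phrase:

```python
# Hypothetical sketch of the phrase-playback idea; the file names and phrase
# labels are placeholders, not the student teams' actual data or code.

phrases = {
    ("arabic", "tie_the_line"): "recordings/ar_tie_the_line.mp3",
    ("farsi", "tie_the_line"):  "recordings/fa_tie_the_line.mp3",
    ("arabic", "stay_calm"):    "recordings/ar_stay_calm.mp3",
}

def find_clip(language, phrase):
    """Return the recorded audio file for a phrase in the requested language."""
    return phrases.get((language.lower(), phrase))

print(find_clip("Arabic", "stay_calm"))  # recordings/ar_stay_calm.mp3
```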
A more futuristic variant that would be perfect for a large flexible display mounted on EMILY (the stuff of my dreams!) is to display what you are trying to tell the refugees to do. For example, how to tie a cleat hitch so their boat can be towed. Even just to reinforce how to steer the boat right or left, so the person hears and sees what the directions are. Two teams, Team Tanks and Team TBD, are working on this.
A very promising non-robotic project is the Refugee Predictor. A student team is writing an inductive machine learning program to predict the number of boats, their approximate time of arrival, and their landing locations for the next day. They are hoping that there is a pattern in the weather, water, time of sunrise/sunset, and any other relevant data for the past year that explains why some days there are 20 boats hitting Skala, and other days 8 boats going to Mytilini. What a great use of machine learning!
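For readers unfamiliar with this style of inductive learning, here is a minimal sketch under my own assumptions; the column names, data file, and choice of a random forest are illustrative, not the team's actual pipeline.

```python
# Minimal sketch of the Refugee Predictor idea, assuming a daily table of
# sea/weather conditions and boat counts; feature names, file name, and
# model choice are assumptions, not the student team's actual design.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical columns: wind_speed, wave_height, temperature, daylight_hours,
# boats_arrived (the label, e.g. boats counted the following day).
data = pd.read_csv("daily_conditions.csv")

X = data[["wind_speed", "wave_height", "temperature", "daylight_hours"]]
y = data["boats_arrived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = RandomForestRegressor(n_estimators=200)
model.fit(X_train, y_train)

print("R^2 on held-out days:", model.score(X_test, y_test))
```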
The other Computing for Disasters project helps with data management for us and the other NGOs. In particular, if EMILY is on the water for a morning, the “action” may only be a few minutes. In order to generate a report, someone has to edit the video clip. The students on Team Snips are working to create a website where any of the NGOs can upload a file plus one or more timestamps, and then it will cut out a snippet of a specified length.
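Behind such a website, the cutting step could be as simple as a call to ffmpeg. This is my own minimal sketch, not Team Snips' implementation; the function and file names are hypothetical.

```python
# Minimal sketch of the snippet-cutting idea: given an uploaded video and a
# timestamp, call ffmpeg to extract a clip of a specified length.

import subprocess

def cut_snippet(video_path, start_seconds, length_seconds, out_path):
    """Extract a clip starting at start_seconds and lasting length_seconds."""
    subprocess.run(
        [
            "ffmpeg",
            "-ss", str(start_seconds),   # seek to the timestamp
            "-i", video_path,            # input video uploaded by the NGO
            "-t", str(length_seconds),   # duration of the snippet
            "-c", "copy",                # copy streams, no re-encoding
            out_path,
        ],
        check=True,
    )

# e.g., a 90-second snippet around the 20-minute mark of a morning patrol:
cut_snippet("patrol.mp4", 20 * 60, 90, "patrol_snippet.mp4")
```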
We are seeking funding to buy our own EMILY and Fotokite, then return to Greece to continue to learn and to partner with Prof. Milt Statheropoulos’ group at the National Technical University of Athens.
I am still hoping to raise another $2,504 to cover the unpaid expenses from the January trip so please donate at https://www.gofundme.com/Friends-of-CRASAR
We are just getting word of several building collapses in the Taiwan earthquake; here are some thoughts and data on how robots have been used in previous collapses…
Ground robots may be of the most value. In a situation like this where the building has collapsed, small robots will likely be able to get into voids and go deeper than the 18-20 feet that a camera on a probe or a borescope can reach. Note that canines would normally be used first to indicate that people are alive (if there was any doubt about occupancy). The ground robots would be used to try to localize the survivors AND, at the same time, allow the rescue team to understand the internal layout of the structure. If they can better understand the internal layout of the “pixie sticks” of the rubble, they can extricate the victims faster and with less chance of triggering a secondary collapse. Most of the ground robots used, such as the Inuktun series which have been used the most, have two-way audio so the responders can talk to the victims.
With our colleague Eric Rasmussen, MD FACP, we’ve experimented with how a small robot can carry tubing allowing a survivor to have water. With members of the Texas Task Force 1 medical team, we’ve experimented with how doctors can use the robot to communicate with the survivor, assess their injuries, and engage the survivors, as it may take 4-10 hours to extricate them.
Similar situations where ground robots have been used for multi-story commercial building collapses are:
Ground robots are often not used in earthquakes, such as the Japanese earthquake, because of building resilience and codes. Residential homes are small, often wood, and it is fairly easy to locate victims with canine teams and then extricate them. Adding a robot doesn’t really speed up anything.
UAVs can give an overview of a collapse, but generally it has been the “inside” view that responders need the most and can’t get any other way.
“The Gap” represents a type of no-man’s land for lifeguards. It’s the area that deeper-water patrol boats (such as the Hellenic Coast Guard cutters used in the channel between Turkey and Greece and the smaller rigid hull inflatable boats used by NGOs) cannot enter due to draft restrictions, but that is too far out for lifeguards on shore to wade to, so it has to be reached by a swimming lifeguard. If the boat capsizes, people fall or misjudge the depth and jump off, or the boat runs aground, the lifeguards in patrol boats are not in position to help. The lifeguards on land have to swim flotation devices out, taking valuable time and risking panicking people trying to climb on their heads.
CRASAR and Roboticists Without Borders members are on Lesvos on day 2 of a 10 day deployment to Greece to assist the local Coast Guard and lifeguard organizations in rescuing refugees from drowning. As you may know, over 300 refugees have drowned, with 34 bodies found on Jan. 5, seven of which were children. We are deploying two types of robots, the EMILY marine vehicle that is used worldwide to assist lifeguards and a Fotokite, plus ruggedized Sonim phones from our Texas A&M sister center, the Internet2 Technology Evaluation Center. This is my 21st disaster and I’ve never seen such a diversity of NGOs working so well together and such a compassionate local population. It is an honor to think that we could provide them with useful tools to do their amazing and heartbreaking work. As you see in the video below, think of EMILY as a combination of a large life preserver and a battery-powered miniature jet ski that a lifeguard can radio control. Based on prior use and talking with the PROEM-AID and PROACTIVA lifeguard teams here, we have identified 4 possible uses for her- 2 of which are standard operating procedures but 2 are new challenges posed by the unique situation here. The lessons learned here would be applicable to other marine catastrophes such as cruise ship or ferry sinkings.
This is a standard use of EMILY. As illustrated in the above video, we demonstrated the EMILY robot to the PROEM-AID lifeguard team from Spain- two of whom are shown here as victims hanging on. Five or more people can hang on. In rescues from a lifeboat (versus the shore), EMILY zooms out with a line because lifeguards can pull her back loaded with people faster than she can jet-ski back. Plus it is much less scary for victims without the wallowing sound, wake, and vibration.
In this video, Chief Fernando Boiteux of the CRASAR team (in blue) demonstrated using a smartphone to view EMILY’s onboard camera, which can switch between visible light and infrared. A member of the PROACTIVA lifeguard team (in red and black) is shown driving EMILY. The lifeguard can direct EMILY to victims out of easy range of sight by using EMILY’s onboard camera.
One area that we hope to collect data on is the use of thermal imagery to help the lifeguard see the victims at night and in high waves. (And our students will be working on algorithms to exploit this new sensor to make EMILY smarter.)
It’s straightforward to send EMILY out to a boat in trouble, tell the people to unclip the line (yes, EMILY has two-way audio) and tie it to their boats. EMILY does this a lot in the Pacific Northwest where kayakers get pummeled by waves on the rocky shore and the rescue boat can’t get close enough. Once the line is on the kayaker, the rescue boat hauls it off the rocks while EMILY zooms back out of the way. This may be very useful at Lesvos because parts of the shore are treacherous.
Given that EMILY has two-way audio, goes 20 MPH, and has a long radio-control range (plus the spiffy flag for visibility), PROACTIVA lifeguards envisioned that when a boat was heading to a bad location, they could use EMILY to guide whoever was piloting the boat to a better beach.
The refugee crossings present a new scenario- how to handle a large number of people in the water. Some may be in different levels of distress, elderly or children, or unconscious. One solution is to use EMILY to go to the people who are still able to grab on, while the lifeguards swim to aid the people who need special professional attention. Chief John Sims from Rural/Metro Fire Department, Pima, (our 4th team member) is anticipating situations where rescuers can concentrate on saving children and unconscious victims while sending EMILY to the conscious and responsive people. We are also going to experiment with the Fotokite, which is NOT considered a UAV by aviation agencies. It is a tethered aerial camera originally developed for safe and easy photo journalism- specifically because tethered aircraft like balloons and kites under certain altitudes are not regulated. I was immediately impressed when I saw it at DroneApps last year. It’s both a solid technology and it can be used where small unmanned aerial systems cannot be used since the flying camera is tethered. One challenge that the lifeguards have is seeing exactly what the situation is and who is in what kind of distress. That could be magnified in the chaos of a capsized boat. Even a 10 or 20 foot view could help rescuers see over the waves and better prioritize their lifesaving actions. I am delighted that Sergei Lupashin and his team scrambled over the holidays to get us one.
We expect to primarily deploy from lifeguard boats that go out to the refugee boats but perhaps from the beach as well. Note that EMILY doesn’t replace a responder, it is one more tool that they can use. It is a mature technology that has helped responders save lives since 2012 (see https://lifesaving.com/node/2815). I first saw a prototype in 2010, and Tony Mulligan, creator and CEO of Hydronalix, and Chief Fernando Boiteux from the LA County Fire Department brought EMILY to our Summer Institute on Flooding in 2015- and now they are deploying with us. Chief Boiteux is using his vacation days to come and serve as an expert operator. This is another case of proven, mature robot technology that exists but is not getting the attention and adoption it deserves. I hate to make too big a deal about our deployment, as we still haven’t done anything yet. And it’s not about us, but about helping the selfless work that the Hellenic Coast Guard and NGOs are doing and have been doing through sun and storm, hot and cold. However, this deployment is being funded out of pocket. Even with Roushan Zenooz and Hydronalix donating partial travel costs and Fotokite donating a platform, we are still short. So please consider donating at https://www.gofundme.com/Friends-of-CRASAR to cover the remaining costs (we couldn’t wait any longer). Once we can establish the utility of EMILY, we also hope to raise enough additional money to leave an EMILY (or multiple ones) behind.
This is a 15 minute talk I gave (virtually) at the World Engineering Conference and United Nations World Conference on Disaster Risk Reduction in Japan. It was a talk for a general audience and, while there is nothing new, it does provide a general overview.
Much of the material is captured in detail in Disaster Robotics, MIT Press.
I was a panelist on The Robot Revolution panel held by the Chicago Council on Global Affairs in conjunction with the Chicago Museum of Science and Industry (and the fantastic exhibit Robot Revolution!). You can see the panel in the YouTube clip here.
What struck me was the focus on the ethics of robotics: Were they safe? What about weaponization? Are they going to take away jobs? Elon Musk and Stephen Hawking say they are dangerous, and so on.
I believe that it is unethical not to use existing (and rapidly improving) unmanned systems for emergency preparedness, response, and recovery. The fear of some futuristic version of artificial intelligence is like having discovered penicillin but saying “wait, what if it mutates and eventually leads to penicillin-resistant bacteria 100 years from now” and then starting an unending set of committees and standards processes. Unmanned systems are tools like fire trucks, ambulances, and safety gear.
Here are just three of the reasons why I believe it is unethical not to use unmanned systems for all aspects of emergency management. These came up when I was being interviewed by TIME Magazine as an Agent of Change for Hyundai (but don’t blame them)!
1. Robots speed up disaster response. The logarithmic heuristic developed by Haas, Kates, and Bowden in 1977 posits that each phase of disaster recovery lasts roughly ten times longer than the phase before it, so time saved in an early phase compounds through the later ones. Thus, reducing the initial response phase by just 1 day can reduce the overall time through the three reconstruction phases to complete recovery by up to 1,000 days (nearly 3 years); a quick sketch of the arithmetic follows this paragraph. Think of what that means in terms of lives saved, people not incurring additional health problems, and the resilience of the economy. While social scientists argue with the exact numbers and phases in the Haas, Kates, and Bowden work, the general principle holds. If the emergency responders finish life-saving, rescue, and mitigation faster, then recovery groups can enter the affected region and restore utilities, repair roads, etc.
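Here is a minimal sketch of that compounding, assuming a 10x ratio between successive phases and three phases after the initial response; it is my illustration of the arithmetic, not the Haas, Kates, and Bowden formulation itself.

```python
# Sketch of the compounding claimed by the heuristic: each recovery phase is
# assumed to last roughly ten times as long as the one before it, so a day
# saved in the initial response ripples through the later phases.

PHASE_MULTIPLIER = 10       # assumed 10x ratio between successive phases
RECONSTRUCTION_PHASES = 3   # number of phases after the initial response

def days_saved_overall(days_saved_in_response: float) -> float:
    """Days shaved off total recovery if the initial response is shortened."""
    return days_saved_in_response * PHASE_MULTIPLIER ** RECONSTRUCTION_PHASES

print(days_saved_overall(1))  # 1 day saved in response -> up to 1000 days
```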
At the 2011 Tohoku tsunami, we used unmanned marine vehicles to reopen, in 4 hours, a fishing port that was central to the economy of the prefecture. It would have taken manual divers 2 weeks to cover the same area- and more than 6 months before divers could be scheduled to do the inspection. That would have caused the fishing cooperatives to miss the salmon season- which was the center of their economy.
At the 2014 Oso mudslides, we used UAVs to provide ESF3 Public Works with geological and hydrological data in 7 hours that would normally take 2-4 days to get from satellites, at higher resolution than satellites and from better angles than could have been obtained from manned helicopters- if the helicopters could have safely flown those missions. The data was critical in determining how to protect the responders from additional slides and flooding, protecting residents and property that had not (yet) been affected, and mitigating damage to salmon fishing.
2. Land, sea, and aerial robots have already been used in disasters since 2001. By my count, unmanned systems have been used in at least 47 disasters in 15 countries. It’s not like these are new technologies. Sure, they are imperfect (I’ve never met a robot I couldn’t think of a way to improve), but they are not going to get any better for emergency response until professionals have them and can use them- there has to be a market, and then market pressure/competition can drive useful advances!
3. Robots can prevent disasters. Consider this: the US has an aging infrastructure. According to the American Society of Civil Engineers, 1 in 9 bridges is structurally deficient. And transportation departments don’t have cost-effective ways to inspect bridges, especially the underwater portions. Remember the I-35W bridge collapse in Minnesota? Minnesota is apparently one of the top states in keeping up with bridge inspections, yet only 85% of its bridges are compliant with Federal inspection standards. That doesn’t bode well for them- or for the rest of us. We need unmanned systems that can work cost-effectively, 24/7, and in places inspectors can’t go, or can’t go quickly or safely enough.
(Note: there is lots of great work going on worldwide and we look forward to working with all companies and researchers to help improve this vital technology)
Researchers at Texas A&M and the University of Nebraska-Lincoln have found that popular software packages for creating photo mosaics of disasters from imagery taken by small unmanned aerial systems (UAS) may produce anomalies that prevent the mosaics from being used to reliably determine if a building is safe to enter or to estimate the cost of damage. Small unmanned aerial systems can enable responders to collect imagery faster, cheaper, and at higher resolutions than from satellites or manned aircraft. While software packages are continuously improving, users need to be aware that current versions may not produce reliable results for all situations. The report is a step towards understanding the value of small unmanned aerial systems during the time- and resource-critical initial response phase.
“In general, responders and agencies are using a wide variety of general purpose small UAS such as fixed-wings or quadrotors and then running the images through the software to get high resolution mosaics of the area. But the current state of the software suggests that they may not always get the reliability that they expect or need,” said Dr. Robin Murphy, director of the Texas A&M Engineering Experiment Station Center for Robot-Assisted Search and Rescue and the research supervisor of the study. “The alternative is to purchase small UAVs explicitly designed for photogrammetric data collection, which means agencies might have to buy a second general purpose UAV to handle the other missions. We’d like to encourage photogrammetric software development to continue to make advances in working with any set of geo-tagged images and being easier to tune and configure.”
In a late breaking report (see SSRR2015 LBR sarmiento duncan murphy) released at the 13th annual IEEE International Symposium on Safety, Security, and Rescue Robotics held at Purdue University, researchers presented results showing that two different photogrammetric packages produced an average of 36 anomalies, or errors, per flight. The researchers identified four types of anomalies impacting damage assessment and structural inspection in general. Until this study, it does not appear that glitches or anomalies had been systematically characterized or discussed in terms of their impact on decision-making for disasters. The team of researchers consisted of Traci Sarmiento, a PhD student at Texas A&M; Dr. Brittany Duncan, an assistant professor at the University of Nebraska-Lincoln who participated while a PhD student at Texas A&M; and Dr. Murphy.
The team applied two packages, Agisoft Photoscan, a standard industrial system, and Microsoft ICE, a popular free software package, to the same set of imagery. Both software packages combine hundreds of images into a single high resolution image. They have been used for precision agriculture, pipeline inspection, and amateur photography and are now beginning to be used for structural inspection and disaster damage assessment. The dimensions of, and distances between, objects in the image can be accurately measured to within 4 cm. However, the objects themselves may have glitches or anomalies created through the reconstruction process, making it difficult to tell if an object is seriously damaged.
The researchers collected images using an AirRobot 180, a quadrotor used by the US and German militaries, flying over seven disaster props representing different types of building collapses, a train derailment, and rubble at the Texas A&M Engineering Extension Service’s Disaster City®. The team flew five flights over 6 months. The resulting images for each of the five flights were processed with both packages, then inspected for anomalies using the four categories.
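For readers curious what “an average of 36 anomalies per flight” means mechanically, here is a purely illustrative tally; the category names and counts below are placeholders, not the study’s actual four categories or data.

```python
# Illustrative tally only (not the researchers' analysis code): count
# anomalies per flight and per category, then compute the per-flight average.

from collections import Counter

# Each record: (flight_id, anomaly_category); both values are placeholders.
observations = [
    (1, "category_A"), (1, "category_B"),
    (2, "category_C"), (2, "category_B"), (2, "category_D"),
]

per_category = Counter(category for _, category in observations)
flights = {flight for flight, _ in observations}
average_per_flight = len(observations) / len(flights)

print(per_category)
print("average anomalies per flight:", average_per_flight)
```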