Why It Is Unethical NOT to Use Robots for Disasters

I was a panelist on The Robot Revolution panel held by the Chicago Council on Global Affairs in conjunction with the Chicago Museum of Science and Industry (and the fantastic exhibit Robot Revolution!). You can see the panel in the YouTube clip here.

What struck me was the focus on the ethics of robotics: Are they safe? What about weaponization? Are they going to take away jobs? Elon Musk and Stephen Hawking say they are dangerous. And so on.

I believe that it is unethical not to use existing (and rapidly improving) unmanned systems for emergency preparedness, response, and recovery. The fear of some futuristic version of artificial intelligence is like having discovered penicillin but saying “wait, what if it mutates and eventually leads to penicillin-resistant bacteria 100 years from now?” and then starting an unending set of committees and standards processes. Unmanned systems are tools, like fire trucks, ambulances, and safety gear.

Here are just three of the reasons why I believe it is unethical not to use unmanned systems for all aspects of emergency management. These came up when I was being interviewed by TIME Magazine as an Agent of Change for Hyundai (but don’t blame them)!

1. Robots speed up disaster response. The logarithmic heuristic developed by Haas, Kates, and Bowden in 1977 posits that each phase of disaster recovery lasts roughly 10 times as long as the phase before it, so time saved early compounds through the later phases. Reducing the initial response phase by just 1 day reduces the overall time through the three reconstruction phases to complete recovery by up to 1,000 days (or about 3 years); the arithmetic is sketched in the code after this list. Think of what that means in terms of lives saved, people not incurring additional health problems, and the resilience of the economy. While social scientists argue about the exact numbers and phases in the Haas, Kates, and Bowden work, the heuristic generally holds: if emergency responders finish life-saving, rescues, and mitigation faster, then recovery groups can enter the affected region sooner to restore utilities, repair roads, etc.

At the 2011 Tohoku tsunami, we used unmanned marine vehicles to reopen, in 4 hours, a fishing port that was central to the economy of the prefecture. Divers would have needed 2 weeks to manually cover the same area, and it would have been more than 6 months before divers could even be scheduled to do the inspection. That delay would have caused the fishing cooperatives to miss the salmon season, the center of their economy.

At the 2014 Oso mudslides, we used UAVs to provide ESF3 (Public Works) with geological and hydrological data in 7 hours that would normally take 2-4 days to get from satellites; the imagery was higher resolution than satellite data and captured better angles than manned helicopters could have provided, if the helicopters could have safely flown those missions at all. The data was critical in determining how to protect the responders from additional slides and flooding, protecting residents and property that had not (yet) been affected, and mitigating damage to salmon fishing.

2. Land, sea, and aerial robots have been used in disasters since 2001. By my count, unmanned systems have been used in at least 47 disasters in 15 countries, so these are not new technologies. Sure, they are imperfect (I’ve never met a robot that I couldn’t think of something that would make it better), but they are not going to get any better for emergency response until professionals have them and can use them: there has to be a market before market pressure and competition can drive useful advances!

3. Robots can prevent disasters. Consider this: the US has an aging infrastructure. According to the American Society of Civil Engineers, 1 in 9 bridges is structurally deficient, and transportation departments don’t have cost-effective ways to inspect bridges, especially the underwater portions. Remember the I-35W bridge collapse in Minnesota? Minnesota is apparently one of the top states in keeping up with bridge inspections, yet only 85% of its bridges are compliant with Federal inspection standards. That doesn’t bode well for them, or for the rest of us. We need unmanned systems that can work cost-effectively, 24/7, and in places inspectors can’t go, or can’t go quickly or safely enough.
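To make the compounding in point 1 concrete, here is a minimal back-of-the-envelope sketch in Python. The 10x factor and the three downstream phases come from the Haas, Kates, and Bowden model described above; the function name and structure are mine, purely for illustration.

```python
# Back-of-the-envelope sketch of the Haas, Kates, and Bowden heuristic:
# each recovery phase lasts roughly 10x as long as the one before it,
# so time saved in the initial response compounds downstream.
FACTOR = 10                # growth between successive phases (1977 model)
RECONSTRUCTION_PHASES = 3  # phases remaining after the initial response

def downstream_days_saved(days_saved_in_response: float) -> float:
    """Upper-bound estimate of days shaved off complete recovery when the
    initial emergency-response phase is shortened by the given number of days."""
    return days_saved_in_response * FACTOR ** RECONSTRUCTION_PHASES

print(downstream_days_saved(1))  # -> 1000 days, i.e. roughly 3 years sooner
```

Even if the true factor is smaller than 10, as some social scientists argue, the multiplicative structure is what makes early time savings so valuable.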

Common UAV Software May Not (Yet) Be Reliable for Building Safety or Damage Assessment

Sample reconstruction exhibiting all four types of anomalies

(Note: there is lots of great work going on worldwide and we look forward to working with all companies and researchers to help improve this vital technology)

Researchers at Texas A&M and the University of Nebraska-Lincoln have found that popular software packages for creating photo mosaics of disasters from imagery taken by small unmanned aerial systems (UAS) may contain anomalies that prevent their use for reliably determining whether a building is safe to enter or for estimating the cost of damage. Small unmanned aerial systems can enable responders to collect imagery faster, cheaper, and at higher resolutions than from satellites or manned aircraft. While software packages are continuously improving, users need to be aware that current versions may not produce reliable results for all situations. The report is a step towards understanding the value of small unmanned aerial systems during the time- and resource-critical initial response phase.

“In general, responders and agencies are using a wide variety of general purpose small UAS, such as fixed-wings or quadrotors, and then running the images through the software to get high resolution mosaics of the area. But the current state of the software suggests that they may not always get the reliability that they expect or need,” said Dr. Robin Murphy, director of the Texas A&M Engineering Experiment Station Center for Robot-Assisted Search and Rescue and the research supervisor of the study. “The alternative is to purchase small UAVs explicitly designed for photogrammetric data collection, which means agencies might have to buy a second general purpose UAV to handle the other missions. We’d like to encourage photogrammetric software developers to continue to make advances in working with any set of geo-tagged images and in being easier to tune and configure.”

In a late-breaking report (see SSRR2015 LBR sarmiento duncan murphy) released at the 13th annual IEEE International Symposium on Safety, Security, and Rescue Robotics, held at Purdue University, researchers presented results showing that two different photogrammetric packages produced an average of 36 anomalies, or errors, per flight. The researchers identified four types of anomalies impacting damage assessment and structural inspection in general. Until this study, glitches and anomalies do not appear to have been systematically characterized or discussed in terms of their impact on decision-making for disasters. The team consisted of Traci Sarmiento, a PhD student at Texas A&M; Dr. Brittany Duncan, an assistant professor at the University of Nebraska-Lincoln who participated while a PhD student at Texas A&M; and Dr. Murphy.

The team applied two packages, Agisoft Photoscan, a standard industrial system, and Microsoft ICE, a popular free software package, to the same set of imagery. Both packages combine hundreds of images into a single high resolution image. They have been used for precision agriculture, pipeline inspection, and amateur photography, and are now beginning to be used for structural inspection and disaster damage assessment. The dimensions of, and distances between, objects in the image can be accurately measured to within 4 cm. However, the objects themselves may have glitches or anomalies created through the reconstruction process, making it difficult to tell whether an object is seriously damaged. A rough sketch of the general mosaicking step appears below.
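For readers unfamiliar with this class of tools, the following is a minimal sketch of the general mosaicking step using OpenCV's stitching API. It is illustrative only: Agisoft Photoscan and Microsoft ICE use their own proprietary photogrammetric pipelines, and the file paths here are hypothetical.

```python
import glob
import cv2

# Load one flight's frames (hypothetical directory of geo-tagged JPEGs).
images = [cv2.imread(p) for p in sorted(glob.glob("flight_frames/*.jpg"))]

# SCANS mode targets flat, nadir-style imagery such as aerial survey frames.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("mosaic.jpg", mosaic)
else:
    # Typical failure mode: too little overlap between frames to match features.
    print(f"Stitching failed with status code {status}")
```

Feature-based stitching like this can smear or distort geometry where frames overlap poorly, which gives some intuition for how reconstruction anomalies of the kind described above can arise.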

The researchers collected images using an AirRobot 180, a quadrotor used by the US and German militaries, flying over seven disaster props representing different types of building collapses, a train derailment, and rubble at the Texas A&M Engineering Extension Service’s Disaster City®. The team flew five flights over 6 months. The resulting images from each of the five flights were processed with both packages, then inspected for anomalies using the four categories; the evaluation loop is sketched below.
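As a schematic of that evaluation design, the sketch below loops over the five flights, the two packages, and the four anomaly categories. The function bodies are placeholders: neither package is actually driven this way here, and the study's inspection step was manual, so only the loop structure reflects the methodology.

```python
from statistics import mean

FLIGHTS = [f"flight_{i}" for i in range(1, 6)]         # five flights over 6 months
PACKAGES = ["photoscan", "ice"]                        # the two packages compared
CATEGORIES = ["type_1", "type_2", "type_3", "type_4"]  # the four anomaly types

def build_mosaic(flight: str, package: str) -> str:
    """Placeholder for running one flight's images through one package."""
    return f"{flight}_{package}_mosaic.tif"

def count_anomalies(mosaic: str, category: str) -> int:
    """Placeholder for the study's (manual) inspection step."""
    return 0  # stand-in value; the real counts averaged 36 per flight

per_mosaic_totals = []
for flight in FLIGHTS:
    for package in PACKAGES:
        mosaic = build_mosaic(flight, package)
        per_mosaic_totals.append(
            sum(count_anomalies(mosaic, c) for c in CATEGORIES)
        )

print(mean(per_mosaic_totals))  # the study reported ~36 anomalies per flight
```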
