Welcome to the Center for Robot-Assisted Search and Rescue (CRASAR) at Texas A&M University

CRASAR is a Texas A&M Engineering Experiment Station Center whose mission is to improve disaster preparedness, prevention, response, and recovery through the development and adoption of robots and related technologies. Its goal is to create a “community of practice” throughout the world for rescue robots that motivates fundamental research, supports technology transfer, and educates students, response professionals, and the public. CRASAR is a dynamic mix of university researchers, industry, and responders.

CRASAR has participated in 15 of the 35 documented deployments of disaster robots throughout the world and has formally analyzed 9 others, providing a comprehensive archive of rescue robots in practice. Our industry partners and funding agencies make a wide range of small land, sea, and air robots available for use by responders at no charge through the Roboticists Without Borders program. Our human-robot crew organization and protocols, developed first for UGVs, where studies show a 9-fold increase in team performance, and then extended to small UAVs during our flights at Hurricane Katrina, have been adopted by Italian and German UAV response teams and were used by the Westinghouse team operating the Honeywell T-Hawk at the Fukushima nuclear accident.

CRASAR helps organize and sponsor conferences such as the annual IEEE Safety Security Rescue Robotics conference and workshops such as the recent NSF-JST-NIST Workshop on Rescue Robots.

A good overview of rescue robotics can be found in Disaster Robotics by Robin Murphy (MIT Press, Amazon, and Kindle) and in Chapter 50 of the award-winning Handbook of Robotics. Here’s a list of all known robot deployments: Table of Responses.

Fun facts from “Disaster Robotics”:

- All ground, aerial, and marine robots have been teleoperated (like the Mars rovers) rather than fully autonomous (like a Roomba), primarily because the robots allow the responders to look and act in real time; there’s always something they need to see or do immediately.

- Robots have been brought to at least 35 events and actually used at at least 29 (sometimes the robot is too big or not intrinsically safe).

- The biggest technical barrier is human-robot interaction: over 50% of the failures (27 failures across 13 incidents) have been due to human error.

- Robots are not used until an average of 6.5 days after a disaster; either an agency has a robot on hand and uses it within 0.5 days, or it doesn’t and it takes 7.5 days to realize a robot would be of use and get one on site (see the back-of-the-envelope check below).
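A quick back-of-the-envelope check, assuming the 6.5-day figure is simply a weighted average of the two cases above (an assumption on our part, not something reported in the deployment data), suggests how often an agency already had a robot on hand:

```python
# Back-of-the-envelope sketch (assumption: the 6.5-day average is a weighted
# mix of the two cases above -- 0.5 days when a robot is on hand, 7.5 days
# when one has to be located and brought in).
with_robot_days = 0.5
without_robot_days = 7.5
average_days = 6.5

# Solve average = p * with_robot + (1 - p) * without_robot for p,
# the implied share of incidents where the agency already had a robot.
p = (without_robot_days - average_days) / (without_robot_days - with_robot_days)
print(f"Implied share of incidents with a robot on hand: {p:.0%}")  # ~14%
```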

Click here for more information about CRASAR and its activities.

Donate online to CRASAR to support deployments of Roboticists Without Borders!

Recent News From Our Blog

Flood of data may be the biggest problem in dealing with floods

We just concluded a three-day Summer Institute on Floods (July 26-28, 2015), which was hosted by our “mother” center, the Center for Emergency Informatics. The Summer Institute focuses on the data-to-decision problems for a particular type of disaster. This year’s Summer Institute was the second in a two-part series on floods; even though there was a drought last year, our TEEX partners were very worried about flooding in Texas, and flooding is the number one disaster in the world. It was originally scheduled for early June but had to be moved because of the floods, which gave us the opportunity to discuss topics while they were still fresh on everyone’s minds.


The Summer Institute brought together representatives from 12 agencies, 15 universities, and 5 companies for two days of “this is what we did during the Texas floods” and one day of “this is what we could do or do better” experimentation with small unmanned aerial vehicles, crowdsourcing, computer vision, map-based visualization packages, and mobile phone apps for common operating pictures and data collection. The field exercise was designed to track the resource allocation of field teams, how they can acquire data from the field with UAVs and smartphones, and then how that data can be effectively organized, analyzed, and visualized.

Here are my preliminary findings:

  • Agencies are adopting small UAVs, social media, and smartphone apps to acquire new types of data, faster and over larger geospatial and temporal scales than ever. Small UAVs were used during the Texas floods in all of the affected regions (big shout-out to the Lone Star UAS Center; we flew under their direction during the floods). Agencies were monitoring social media, in one case to uncover and combat a rumor that a levee had broken. Texas Task Force 1 and other agencies used General Dynamics’ GeoSuite mobile app to give tactical responders a common operating picture (another shout-out: GeoSuite was introduced and refined starting with the 2011 Summer Institute).
  • Wireless communications is no longer the most visible barrier to acquiring data from the field in order to make better decisions. While responders in the field may not have as much bandwidth as they did before a disaster, cell towers on wheels are being rapidly deployed and the Civil Air Patrol flew repeater nodes. That said, wireless communications isn’t solved: hilly geography can prevent teams from connecting to sparse temporary nodes, and large parts of rural Texas have limited or no connectivity under normal conditions.
  • The new barrier is what to do with the data coming in from unmanned systems, news feeds, social media feeds, and smartphones. Consider that a single 20-minute UAV flight produced roughly 800 images totaling 1.7GB. There were over a dozen platforms flying daily for two weeks, as well as Civil Air Patrol and satellite imagery. Most of the imagery was being used to search for missing persons, which means each image has to be inspected manually by at least one person (preferably more). Signs of missing persons are hard to see, as there may be only a few pixels of clothing (victims may be covered in mud or obscured by vegetation and debris) or of urban debris (as in, if you see parts of a house, the occupant of the house may be somewhere in the image). Given the multiple agencies and tools, it was hard to pinpoint what data had been collected when (i.e., spatial and temporal complexity) and then access the data by area or time; essentially no one knew what they had (see the indexing sketch after this list). Agencies and insurance companies had to manually sort through news feeds and public postings, both text and images, to find nuggets of relevant information.
  • The future of disasters was clearly in organizing, analyzing, and visualizing the data. Responders flocked to SituMap, an interactive map-based visualization tool that Dr. Rick Smith at TAMU Corpus Christi started developing after the 2010 Summer Institute and that is now being purchased by TEEX and other responders. The agencies awarded $900 in prizes to NSF Research Experiences for Undergraduates students; software that classified imagery and displayed where it was taken took 1st, 2nd, and 3rd place. Multiple agencies requested that those apps be hardened and released, along with the Skywriter sketch-and-point interface (CRASAR developed that for UAVs and it is being commercialized) and the wide-area search planning app developed over the last two summers by other students from the NSF REU program. In previous years, the panels have awarded prizes primarily to hardware: UAVs, battery systems, communications nodes, etc. This year, the attitude was “those technologies are available, now help us use the data they generate.”
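To make the access-by-area-or-time problem concrete, here is a minimal sketch of the kind of spatial/temporal catalog that was missing. It is not any of the tools named above; the CSV layout, field names, and functions are our own assumptions (a real pipeline would more likely pull position and time from EXIF headers):

```python
# Minimal sketch of a spatial/temporal image catalog (assumptions: each flight
# produces a CSV log with one row per image -- filename, lat, lon, and an
# ISO-8601 timestamp; column names and file layout are hypothetical).
import csv
from datetime import datetime
from typing import List, NamedTuple

class ImageRecord(NamedTuple):
    filename: str
    lat: float
    lon: float
    taken_at: datetime

def load_flight_log(path: str) -> List[ImageRecord]:
    """Read one flight's CSV log into a list of image records."""
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records.append(ImageRecord(
                filename=row["filename"],
                lat=float(row["lat"]),
                lon=float(row["lon"]),
                taken_at=datetime.fromisoformat(row["timestamp"]),
            ))
    return records

def query(records: List[ImageRecord],
          min_lat: float, max_lat: float,
          min_lon: float, max_lon: float,
          start: datetime, end: datetime) -> List[ImageRecord]:
    """Return images taken inside the bounding box during the time window."""
    return [r for r in records
            if min_lat <= r.lat <= max_lat
            and min_lon <= r.lon <= max_lon
            and start <= r.taken_at <= end]

# Hypothetical usage: everything collected over one neighborhood in one afternoon.
# catalog = load_flight_log("flight_log.csv")
# hits = query(catalog, 29.95, 29.97, -98.00, -97.98,
#              datetime(2015, 5, 26, 12, 0), datetime(2015, 5, 26, 18, 0))
```

The tools discussed above obviously do far more (streaming ingest, map tiling, multi-agency sharing), but even this level of bookkeeping answers the basic question of what imagery exists for a given area and time window.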

Each year we hear the cry from emergency management, “we’re drowning in data, help,” and this year it was more than a bad pun.


More entries from our blog
Subscribe to our RSS Feed