Archive for June, 2017

Tropical Storm Cindy: Best Practices and Regulations for Agencies Using Small UAS

As Tropical Storm Cindy brings rain and flooding to the Gulf Coast, it’s a good time to share the two “one-pagers” that the Roboticists Without Borders small UAS team members have put together. We also have several team members on standby with small UAS, ready to deploy upon request.

Here are two one-pagers that may be of use to responders:

http://tinyurl.com/crasar-1page-SUAS-regs is a one-page guide to who can fly for an agency and where (and how they can easily check where they are allowed to fly using airmap.io). It is aimed at emergency managers who haven’t worked with SUAS before and have heard that the FAA regulations are daunting; in practice the rules are much simpler, and permissions can be obtained within 1-2 hours if needed.

http://tinyurl.com/crasar-1page-SUAS-floods is a one-page compendium of the missions that SUAS can be used for before, during, and after flooding. It also discusses imagery post-processing and other important considerations (coverage, manpower, software, data management). It is based on our experiences with flooding and storm surges since 2005, especially as reinforced by our recent deployments with Fort Bend County, Texas, and with Tangipahoa and Washington Parishes, Louisiana, last year. A preprint of our 2016 paper detailing the case studies of the Fort Bend County floods is here (IEEE SSRR 2016) and the official paper is here.

We hope for the best for everyone in the path of the storm.

Murphy one of four breakthrough women in IEEE Institute magazine

My overall summary of the AI for Good Summit

What a treat to be a speaker and participant in the X Prize ITU AI for Good Summit, helping advocate for intelligent unmanned systems and emergency informatics. I had feared that the summit would merely recycle the usual memes about AI, but it nimbly avoided that pitfall by bringing together a diverse set of experts in AI and ethics from academia and industry with dozens of UN agencies who could articulate real needs and valid concerns. I found the talks thought-provoking and inspiring, especially ACM President Vicki Hanson’s reminder that “the measure of success for AI applications is the value they create for human lives.” There were unexpected twists, which I will try to summarize and share here.

The summit did indeed touch on the usual memes that are now standard in any discussion of AI for good. I’ll go through the four most frequent and then get to the unexpected.

There were the frequent, obligatory pronouncements about the exponential versus linear nature of computing and all things digital (though Gary Marcus’ contrarian ELIZA-to-Siri linear line provided a delightful counterpoint, and Marcus Shingles gave a business-oriented discussion of the linear-versus-exponential trope, endowing it with actionable relevance).

There was a little of the “there is no commercial economic incentive for AI for good” meme, especially for disasters, refugees, or poverty. However, Anja Kaspersen and other boots-on-the-ground agency reps went a step further and pushed back on academia and industry, making the subtle point that there is also no economic incentive to stay with an AI for Good app after it has been piloted, because it is really, really hard to generate sustainable innovation that solves a real problem. And, as a few cynical participants noted, once a pilot has been milked for marketing, there is no continuing benefit.

I had expected more of the “AI is actually increasing inequality rather than helping” meme, particularly in terms of exacerbating the digital divide, reducing jobs (or good-paying jobs), and increasing the disparity of wealth. This was touched on, but the meme was so clearly accepted by all participants that the discussions focused on how to fix it.

The most frequently occurring meme was that there should be a democratization of AI: access to data, transparency about who owns the data and how it is being used, and a commitment that the data be used for the good of all. This was a near-universal topic for talks, discussion questions, and recommendations.

Now for the unexpected. For me there were some new terms I hadn’t heard before (such as “Centaur” to refer to human-machine collaboration), some I-should-have-thought-of-this-myself moments (for example, that since even the poorest people generally now have a smartphone, or share one for their village, AI needs to be designed to work with that level of platform), and some be-afraid-be-very-afraid moments (particularly Stuart Russell’s cautionary comment that misuse of AI is probably the biggest current concern, but that we could be undergoing a malware revolution that will render everything, including AI, useless).

But the most striking insights for me were three analogies that really hit home.

Peter Marx, in the lively panel we shared, offered a stunning analogy about menus. As Gary Marcus and I complained that too many people had no idea of the breadth of the field of AI and its rich set of techniques, Peter noted that most AI developers were not AI experts but rather were picking AI techniques off a menu. I immediately visualized a person ordering “deep learning” because it had a favorable buzz on Yelp. The analogy implicitly raises the questions: Which restaurants? Who sets the menu? And what about the food pyramid: what goes with the other choices? What’s a balanced meal or system? What about nutritional food labels: do we know the ingredients of this particular AI, and whether there may be too much sugar and salt? Who is conducting the food and health inspections? The menu analogy also touches on the fundamental question of whether practitioners need a formal, comprehensive introduction to AI.

A more speculative analogy, from Robert Kirkpatrick, compared the potential impact of AI to atomic energy. Atomic energy is “leaky,” potentially hazardous and thus has to be handled carefully, and can be used for peace or war. The analogy caused me to immediately think of the nuclear arms race, government regulations, and so on.

The most sobering analogy was Peter Lee’s, equating the need to learn AI with the need to learn to read. Not in the Reading Rainbow sense of learning to read, but in the Gutenberg printing press sense of the word. Given that “there were perhaps 30,000 books in all of Europe before Gutenberg printed his Bible; less than 50 years later, there were as many as 10 to 12 million books” (from http://www.hrc.utexas.edu/educator/modules/gutenberg/books/legacy/), printing clearly presented a revolution. So what did watching the revolution in printing mean to you as a parent? How were you going to make sure your children could read and take advantage of books, even though you didn’t necessarily know what books would be written, by whom, or about what? If AI is similar, what are we doing to educate the next generation about AI? What do they need to know, regardless of what they expect their occupation to be, especially since AI, like reading, isn’t really limited to a specific profession? Beyond making sure our own families aren’t left behind, how can we ensure that everyone is AI literate? If there was any doubt about the enormity of a severe digital divide, the Gutenberg analogy erases it.

Overall, three days well spent, and my hat is off to the ITU for hosting it!