People save lives, not technology.
It is one of my favorite expressions. It is not a question open to interpretation; it is a fact. Futurists challenge this statement. Ten years from now, Artificial Intelligence (AI) could be an effective tool as a crisis unfolds or during post-event disaster response. Experimental applications are already being tested in field experiments. Commercial applications used by Google Search and Facebook are arguably primitive forms of AI: automated display-advertising software that attempts to target specific advertisements based on a user's preferences, using archived metadata and profile information. Robots and drones are powerful technologies that can save time and deliver response options not available in the past. We can design and build just about any kind of sensor you can dream up, able to detect with pinpoint accuracy, faster than any first responder, where injured people are. We can simulate and model almost every type of natural disaster, anywhere in the world. We are entering an era many consider to be the third industrial revolution, driven by powerful computers, sensors and software. Full integration is the next step, and it is happening right now. But I am going to raise my hand and ask one of the most difficult questions on the planet: who, and what, is in control?
A disaster within a disaster unfolds.
A fleet of UAVs and tracked robot vehicles enters a disaster zone, each capable of, and assigned, unique tasks and operational roles. Overhead, the UAVs scan the earth's surface for life signs, density of damage and infrastructure collapse. On the ground, data is relayed from the UAVs to hundreds of robots and unmanned machines preparing to execute software routines programmed into their memory banks to begin specific tasks. The machines do not have to stop or take breaks as they clear away rubble, rescue people and tend to the injured. They are overwhelmingly superior to human capabilities. The only limitation is the number of UAVs and robotic machines available to deploy.
It starts with a survey from the air, collecting all the data necessary to prepare the operational plan. The metadata is collected and analyzed with advanced algorithms, which compute the best operational plan to implement based on parameters input by its designers. The machines begin to carry out their tasks. Sensor signal quality is at times fragmented; a machine tries to re-calibrate, finds no change, and moves on to its next programmed task.
The survey results are in. X many are saved, and Y are left where they are, judged too low a survival probability to assign rescue and recovery resources. The results do not stop there: data is compiled and displayed, showing how many roads are repaired and which ones are next. False positives and unknown data points are cataloged as 'under review' or corrupt data points. Information is automatically processed and flashed across social media, illustrating which water mains are restored and what supplies are marshaled and delivered... but not for every location or region. Why? Because they were analyzed as non-essential, or the machines were not programmed to do so.
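The triage logic in this fully automated scenario can be sketched in a few lines. This is a hypothetical illustration, not a real system; the function, field names and the 0.5 threshold are all assumptions chosen to make the danger visible: a detection below the threshold is written off with no human ever seeing it.

```python
# Hypothetical sketch of fully automated triage: the machine's verdict is final.
def automated_triage(detections, rescue_threshold=0.5):
    """Partition detections by survival probability alone, no human review."""
    rescue, written_off, under_review = [], [], []
    for d in detections:
        p = d.get("survival_probability")
        if p is None:                       # fragmented or corrupt sensor reading
            under_review.append(d)          # cataloged, but no action taken
        elif p >= rescue_threshold:
            rescue.append(d)                # resources assigned automatically
        else:
            written_off.append(d)           # silently left where they are
    return rescue, written_off, under_review

detections = [
    {"id": "A", "survival_probability": 0.9},
    {"id": "B", "survival_probability": 0.4},   # below threshold: written off
    {"id": "C", "survival_probability": None},  # corrupt data point
]
saved, written_off, review = automated_triage(detections)
```

Note that detection "B" never reaches a person; it is simply dropped by an arbitrary threshold, which is exactly the ethical gap this scenario is meant to expose.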
UAVs and robots hailed as heroes, limiting casualties
A disaster is predicted to unfold. Each parameter leading to this conclusion has been computed and analyzed by the best scientific minds reviewing data from satellites, UAVs and a plethora of sensors. Crisis and disaster planners and response agencies know what is coming. It will be impossible to evacuate everyone from the zone. Everyone that does evacuate does so in an orderly fashion. Those that are still in the danger region are given information on how to prepare for what is coming. The population is warned; nothing is guaranteed, nor can it be promised. Recovery resources are prepositioned, ready to execute orders. All it will take is the touch of a virtual button on a smartphone and operations will commence.
Data streams across displays, coordinated in real time for stakeholders across multiple domains and areas of expertise. UAVs and robots are custom-programmed to record and deliver results back to the operations center for evaluation and orientation, to determine how best to act after the disaster strikes. There is no shortage of equipment. Data pulled from the memory banks of city hall, regional government and federal data centers has been plugged in, networked and fully integrated, creating a complete picture of how the disaster will impact the population, critical infrastructure and the hazards likely to be encountered. Open Data standards have delivered a complete picture of the challenges responders are likely to face. The computers have determined the amount of equipment needed for this event; it is reviewed by senior disaster leaders and approvals are signed off. Standing in the wings are thousands of response teams, prepared to carry out their roles.
Survey assessments from the disaster become clear as UAVs send sensor metadata back to the forward control room. What was expected has occurred. It is time to deploy the machines and humans into the affected region. Work begins, combing block by block; programmed calibrations in each piece of equipment analyze and recognize what to look for. Signal resolution varies, creating unidentifiable results. A decision has to be made: direct human intervention is ordered. This cycle is carried out hour by hour, day by day. Sensor data acts as a guide, not as an output of a zero or a one, nor is it automated to self-determine whether further action is required. The cycle is repeated until the emergency is over. Casualties will occur, but not without first verifying that the victims could not be rescued, and repeated overflights and human ground reconnaissance confirm that the mission is completed.
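The contrast with the first scenario is that ambiguous sensor output is escalated rather than auto-resolved. A minimal sketch of that human-in-the-loop rule, with an illustrative confidence cutoff (all names and the 0.8 value are assumptions, not from any real deployment):

```python
# Sketch of the human-in-the-loop cycle: sensor output is a guide, and
# low-confidence results are escalated instead of being self-resolved.
def evaluate_signal(reading, min_confidence=0.8):
    """Allow autonomous handling only for clear, high-confidence signals."""
    if reading["confidence"] >= min_confidence:
        return "proceed"            # machine continues its programmed task
    return "request_human_review"   # unidentifiable result: a person decides

readings = [
    {"sector": "Block 12", "confidence": 0.95},
    {"sector": "Block 13", "confidence": 0.41},  # fragmented signal
]
actions = [evaluate_signal(r) for r in readings]
# Low-confidence sectors stay open until a human closes them out.
```

The design choice is deliberate: the machine never writes off a sector on its own, which is the difference between the two scenarios above.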
Industrial technology revolution
Mankind's first significant investment in multi-role UAV, robotics and computer/software technology on a grand scale began in 1959, with the successful launch of the U.S.S.R.'s Luna 2 spacecraft, the first to reach the moon. Our need to explore, research and understand the Moon's environment for the purpose of human exploration ignited scientific discovery as an important policy initiative. Very few people understood the long-term influence these programs would have. Evolution in robotics, computers and software sent R&D skyrocketing in every direction. Today, platforms are more powerful, smaller and more sophisticated than ever believed possible. Today's modern Unmanned Aerial Vehicle (UAV) has more power and capability than all the investments in spacecraft technologies between 1959 and 1979 combined. The amount of software code built into Honda's Advanced Step in Innovative Mobility robot, ASIMO, is a closely held secret. I would be willing to bet that ASIMO's level of sophistication is equal to that of the International Space Station. Both began operations and testing in the year 2000, one month apart (October and November respectively)!
In between these platforms are specialized sensors and analytic software programs. Today, we can collect, distribute and compile data points using microchip processors and solid-state drives; a laptop can hold the infrastructure data of an entire region the size of New York State. We can distribute this data, pair it with sensors, and load both into a UAV the size of an adult bicycle. We can track, monitor and record changes in environmental conditions, human behavior and other dynamic changes as they occur, in real time. Technology engineers have developed systems that can collect data far faster than humans can absorb it. Solving for these analytically demanding environments has fueled research in AI. Sensor outputs can be quickly transmitted and computed, delivering results in any model desired. Advancements in this field have been dramatic. Drones can fly precise aerial routes, hover at specific points in the air, sweep a region with multiple sensors, and record results quickly for redistribution and micro-analysis. Ground-based robots are limited only by the scale and size required, for they can be equipped and powered to any specification or requirement dreamed up.
Advances in technology are driving new designs with performance capabilities that can serve a variety of mission needs. Every conceivable task can be automated, including medical, chemical and biological hazards, geological and meteorological conditions, asset management and logistics management. It is probably safe to say we have not yet found the outer limits of where this ends.
Software programming has reached the level where it can deliver services and answers at incredible speeds. Software programmers have developed tools that can measure and record results for any parameter desired. The ability to model any variable in a given environment is limited only by the scale and performance level desired. Not only is it possible to detect and distinguish different human life signs, but also how many there are and where. How much longer will it be before it is possible to determine their condition and life expectancy?
There are countless university students, commercial vendors and government research agencies developing systems in this field. Evidence suggests advances in robotics, sensors and software are strengthening public safety response capabilities, delivering on primary tasks, including saving lives. Billions of dollars have been invested, with billions more to come in the near future. Policy surrounding the use and implementation of the technology has taken hold around the world, with many questions and conclusions that are not always aligned or mutually agreed upon. We are at the forefront of these issues. Over the last 55 years, we have expanded our knowledge and capabilities. History is not yet a sufficient barometer to offer conclusive evidence of our ability to manage the ethical issues that need answers.
Learning how to integrate
UAVs and automated drones deliver different mission profile options. Definitions are being misused, as are claims about capabilities and performance, and lines of distinction are becoming blurred in some circles. So too are policy discussions. Several articles showing the value of UAVs are posted in the Crisis and Disaster Management magazine. It is clear UAVs and robots can save lives. They are valuable observation tools for first responders and civil service engineers, helping determine which response options can be considered. By the same token, such technology can also lure users into a false sense of control in a variety of scenarios and environmental conditions, particularly if they are improperly trained or unqualified.
Advances in technology are being designed individually and collectively. Analytics and automation are two important areas of focus. Creators are not necessarily swayed by those directly outside their own domains. This is beginning to change, as suggested by Sir Tim Berners-Lee, who is investigating fusion points, trying to assess gaps and potential hazards in a shared (Open Data) model environment. And like those space pioneers of the 1950s, for every problem uncovered, solutions are at hand through research and development and experimental usage in the field. Humanitarian organizations are uncovering real problems such as privacy, moral standards, and the performance standards expected or limited. The Humanitarian eXchange Language (HXL) is an example of this potential. Human intervention and oversight are coming into focus. Different domains are beginning to intersect, debating the impacts of AI programs. This debate is real, from medical ethics to rescue priority standards and guidelines. There are at least a dozen more volatile questions not yet answered: privacy, data usage, post-disaster information use, who does and doesn't have access, Open Data standards, statistical verification, accuracy, security regulations, and so on. Organizations are asking the tough questions and hammering away at eliminating the constant cycle of observing and re-observing limitations discovered in operations. There is a real danger in programming pre-developed answers directly into a computer and allowing it to execute mission profiles based on those inputs. There is evidence that this is being implemented in the insurance and pharmaceutical industries, raising a number of ethics questions.
A real-world scenario that could go wrong.
A disaster is forecast to strike. The plan, built in collaboration with the city's parking enforcement agency, which has every license plate in its memory banks, links that data to a disaster UAV's image sensor, which can detect and identify each plate. The UAV will fly over the zone after the event calms down. A family has two cars and decides to take only one to evacuate, leaving the other behind. The family checks in with Emergency Management's mandatory evacuation order checklist, ticking off a check mark and sending the message that they are in compliance.
The disaster hits. Those that did not leave in time try to evacuate at the last minute. The neighbors of the family that has already left have a spare set of keys to the second car. They make it halfway out of the evacuation zone when the car is hit by debris, injuring its occupants. The UAV spots the vehicle, identifies it as the family's second vehicle, and declares it post-event debris, because the family has already evacuated the area. This is one of the simpler scenarios requiring enhanced analysis.
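The flaw in this scenario can be reduced to a few lines of hypothetical code. Everything here (the function, the registry fields, the plate number) is an illustrative assumption, reconstructing only the logical shortcut: classification is keyed on the owner's evacuation status, with no check for who is actually inside the vehicle.

```python
# Hypothetical reconstruction of the flawed classification rule above.
def classify_vehicle(plate, evacuation_registry):
    """Classify a detected vehicle using only the owner's evacuation status."""
    owner = evacuation_registry.get(plate)
    if owner and owner["evacuated"]:
        # The fatal shortcut: owner checked out, so the car must be debris.
        return "post_event_debris"
    return "possible_occupants"

registry = {"ABC-123": {"family": "Smith", "evacuated": True}}
# The neighbors borrowed the second car, but the registry says "evacuated":
verdict = classify_vehicle("ABC-123", registry)
```

Here `verdict` comes back as `"post_event_debris"` even though injured people are inside, because the rule conflates the owner's status with the vehicle's occupancy, which is precisely the enhanced analysis the scenario calls for.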
No less important are the implications of NOT having technology available when other communities do. Fear, Uncertainty (or the unknown) and Doubt (FUD) in these equations charge each stakeholder with tasks that require rigorous responsibility, ethics and policy review.
It is often said that our evolution cannot be halted. Perhaps this is so. What no one is yet asking is: are we on a mutually agreeable path? I would argue that the more we automate the analysis and interpretation of data collected by sensors and computed by algorithms, the more ill-advised our position could prove. Humans must be in the decision and analysis loop before actions are considered and carried out. Right now, it is possible to configure, compute and deliver models that can predict outcomes using AI technology, delivered by new applications and platforms, without any input from humans from the moment they are turned on.
Complicating matters is the contemplation of artificial 'human' decision points. This refers to design parameters contemplated at every stage of development, built in at levels referred to as fail-safe rules: decisions cannot be executed without human oversight. The question consists of the five W's (who, what, where, when and why). Consider a scenario where data is trusted to be accurate, and the person in charge makes a decision based solely on the outputs. Intervention is a human trait that many argue is our Achilles' heel and should be swept away through the use of AI. Double confirmation (or even triple, or quadruple) is a safety mechanism that has saved our world during three modern-day historical events: in 1962 (the Cuban Missile Crisis), 1983 (Stanislav Petrov and the false alarm of the Soviet early warning system) and 1995 (the Black Brant XII rocket launch). During a disaster, this scenario runs every minute of deployment. How will advanced systems impact the decision-making process? What communities will be analyzed as beyond saving at the touch of a tablet screen, before and after the event occurs?
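A double-confirmation fail-safe of the kind described above can be sketched as a simple gate: an irreversible action executes only after a minimum number of independent human approvals. The function and approver names are illustrative assumptions, not an existing system.

```python
# Sketch of a double-confirmation fail-safe rule: no irreversible action
# without at least `required` independent human confirmations.
def authorize(action, approvals, required=2):
    """Return an EXECUTE or HOLD decision based on distinct approvers."""
    distinct = set(approvals)  # the same approver cannot confirm twice
    if len(distinct) >= required:
        return f"EXECUTE: {action}"
    return f"HOLD: {action} ({len(distinct)}/{required} confirmations)"

print(authorize("write off sector 7", ["operator_1"]))           # held
print(authorize("write off sector 7", ["operator_1", "chief"]))  # executes
```

The deduplication step matters: a single operator pressing the button twice is not double confirmation, which mirrors how the 1962, 1983 and 1995 incidents turned on a second, independent human judgment.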
Decision support, finding the balance
The human brain has one advantage no computer or software program will ever have: the ability to pause and deliberate over what is defined as ethical. We have our flaws, to be sure, but there can be no doubt that, when faced with complex problems, individuals and collective thinking can address and resolve issues. Our ultimate trump card is the ability to reason and pause, something a computer or software program cannot do consistently every time, during every possible event, no matter how perfect its creators may argue it to be. We do not yet know the potential tipping points that would suggest it should even be attempted.
Advance warning and detection technology is improving and accelerating rapidly. There are no time limits or constraints on creating next-generation applications. Beta-to-mass-production time frames are shrinking. Robots, UAVs, processing power, data and artificial intelligence are melding in new and innovative ways. In the two hypothetical scenarios illustrated above, the results are not really different, with the exception that one has human checks and balances while the other does not. There are UAV (and satellite/sensor) technologies that do not fall immediately under intense scrutiny or criticism. Examples include sensors used in geomatics, archaeology, seismology and atmospheric research, considered benign, non-intelligent devices because they are designed for a narrowly defined set of measurements. But the door rips wide open as soon as prediction models enter the equation. Intervention has a whole new meaning in this light, because on occasion it defies logic. Dr. McCoy is probably grinning from ear to ear. It can be difficult to understand the rules some scientists, engineers and programmers discuss, and how they have come to their conclusions, in order to integrate multiple sensor outputs into theoretical models.
The good news is that by doing so, many successful outcomes are being achieved, though not without some setbacks and conflicts. Hurricane Sandy was the first real illustration where three competing initial prediction models were published. Two models survived as the storm moved up the east coast of Florida, eliminating the Gulf of Mexico version. At that point, the two remaining models had very different analyses and conclusions, one British, the other American. The American model suggested Sandy would follow the traditional hurricane path, head back out into the middle of the Atlantic, bleed off and die, while the British model predicted it would parallel the eastern seaboard, gain strength and head inland near New Jersey and New York as a major storm. We all know what happened next. (You can watch the documentary linked in our Flipboard Crisis and Disaster Management magazine.) The lesson learned here is not to rely upon a single group, but upon multiple entities. These observations and lessons should be applied elsewhere. Just because a computer predicts or states that someone has evacuated doesn't mean everyone has.
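The multiple-models lesson can be sketched as a simple consensus rule: act only when a clear majority of models agree, and escalate disagreement to humans. The model names, track labels and the two-thirds threshold are illustrative assumptions, not the forecasting centers' actual procedures.

```python
# Sketch of a multi-model consensus check: agreement is actionable,
# disagreement is escalated to human forecasters.
from collections import Counter

def consensus(predictions, minimum_share=0.66):
    """Return the majority track, or flag disagreement for human review."""
    counts = Counter(predictions.values())
    track, votes = counts.most_common(1)[0]
    if votes / len(predictions) >= minimum_share:
        return track
    return "DISAGREEMENT: human review required"

# Hypothetical Sandy-style split between competing model tracks:
models = {"GFS": "out to sea", "ECMWF": "landfall NJ/NY", "ensemble": "landfall NJ/NY"}
print(consensus(models))  # two of three agree -> "landfall NJ/NY"
```

An even split would return the review flag instead of a track, which is the point of the lesson: a lone model's answer, however confident, is not a plan.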
Right now, the deployment of satellites, drones, UAVs and terrestrial robots loaded with sensors into disaster zones by emergency management organizations is in its development stage, circa 1963, but it will accelerate faster than the Starship Enterprise. Some technologies are farther ahead than others, but none is yet at an integrated level.
To support current and future demands in the field, technology requires an adjustable foundation, capable of supporting dynamic outputs and inputs. Mistakes and controversy will occur over the next several years. Policy will undergo extensive review and experimental rules will go into effect. R&D will expose vulnerabilities and gaps. Next-generation technology will be leveraged. It will require a fulcrum point capable of being positioned dynamically, supporting multiple environmental variables, yet burdened with almost impossible demands requiring equalized actions. If moved prematurely into service, imbalance could occur.