
Surf the Data Tsunami

Co-author Gabe Harris, Orions Systems VP of Product Management, received third place from the U.S. Naval Institute for his essay on developing AI for video intelligence. The essay appeared in Proceedings magazine; subscribers can access the final essay here.

“Automating the derivation of knowledge from data with AI begins by identifying the problem or the issue to be resolved. The next step is to develop a structure for the data that underlies that knowledge.”

Surf the Data Tsunami

The Navy needs to exploit the great surge of information to replace situational awareness with situational understanding.

By Gabe Harris, Cynthia Lamb, and Jerry Lamb

The Navy is on the verge of having many new or upgraded systems generating vast amounts of data. Sonars, satellite sensors, and radars are getting more prevalent and more efficient every day. Consider the micro-satellites described by Sean Cate and Jesse Sloman in the May 2016 Proceedings.1 When fully operational, these would, in one company’s plans, image every place on earth twice per day and provide 90-second video clips—and this is just one company.2 Cate and Sloman are correct that this will change the nature of naval warfare, but only if the Navy solves the problem of how to use all that data. There is not enough time, and there are not enough eyeballs, to view and compare images of the entire surface of the Earth twice a day. Even the oceans, with their many open areas, would be impossible to search given current methods.

The world creates 2.5 quintillion (10^18) bytes of data per day from unstructured data sources like sensors, social media posts, and digital photos. “Unstructured” means that the elements within the data cannot be put into a typical database management system or displayed in titled columns and rows. Big data from video and sensors is a particular challenge. Consider that three minutes of video at gigapixel resolution takes up the same storage (64 GB) as roughly 64,000 books. Imagine the amount of data being generated by a single submarine’s sensors every day. Now increase that amount with data from unmanned vehicles and various surveillance systems, and it is like a “data tsunami.”

To make sense of all this video and sensor data today, human analysts have to look at it or use a computer algorithm to decide if it requires further action. High error rates even among experienced operators, and the large numbers of false alarms generated automatically by the systems, show that neither approach works effectively on its own. The military needs to combine the two in an innovative, flexible way. Perhaps Watson, IBM’s artificial intelligence (AI) machine famous for winning “Jeopardy!,” can provide some clues.

Learning to Surf

Algorithms are great at analyzing video and image content quickly, but they need to be trained by humans on what to look for, and when the stakes are high, humans need to play a role in the final analysis.

The entertainment and AI worlds collided recently when Watson created the first AI-directed movie trailer, for the film “Morgan” (an apt choice given its subject is an artificially created human).3 The trailer was a true collaboration between AI and humans. Breaking down the workflow shows the handoffs between Watson and the human creative team and demonstrates how algorithms combined with people power can change the process of video analysis—and even how new things can be created from it. The “Morgan” trailer is a good example of the “algorithms + human cognition” approach needed to ride the tsunami.

The first step requires people to train the algorithms. IBM researchers taught Watson what types of scenes typically show up in a horror-movie trailer by selecting 100 trailers and then choosing video segments the AI could analyze. Several different algorithms detected people, objects, and scenery, and each scene was tagged with a label, such as “eerie,” “frightening,” and so on. Then the researchers gave Watson the full movie from which to select scenes that would fit into a typical horror movie trailer. Watson selected ten based on the inputs and tagging. Finally, a human editor chose a few of Watson’s selections, arranged them, and added music and title cards. The process only took about 24 hours, weeks less than the time typically required.
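To make the division of labor concrete, here is a minimal sketch of that tag-then-shortlist pattern in Python. The labels, weights, and scoring function are invented for illustration and are not IBM’s actual pipeline; the point is only where the machine stops and the human editor takes over.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    start_sec: float
    end_sec: float
    labels: dict  # e.g., {"eerie": 0.8} -- labels are hypothetical

# Weights standing in for what was learned from the 100 training trailers.
TRAILER_PROFILE = {"eerie": 1.0, "frightening": 0.9, "tender": 0.2}

def score(scene: Scene) -> float:
    """Score a scene by how strongly its labels match the trailer profile."""
    return sum(TRAILER_PROFILE.get(lbl, 0.0) * conf
               for lbl, conf in scene.labels.items())

def shortlist(scenes: list, k: int = 10) -> list:
    """Machine step: hand the k highest-scoring candidate scenes to the editor."""
    return sorted(scenes, key=score, reverse=True)[:k]

movie = [
    Scene(12.0, 19.5, {"eerie": 0.8}),
    Scene(40.0, 47.0, {"frightening": 0.9, "eerie": 0.4}),
    Scene(90.0, 95.0, {"tender": 0.7}),
]
candidates = shortlist(movie, k=2)          # Watson-style selection
final_cut = [candidates[1], candidates[0]]  # human editor arranges and trims
print([(s.start_sec, s.end_sec) for s in final_cut])
```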

The human part is easy to comprehend, but the algorithms sometimes seem to be magical. If a simple Facebook algorithm can tag your friends in a photo without human intervention, we should expect the limitless world of computer algorithms to be able to deal with all of the data being generated by today’s tactical systems. Yet for all their seeming omniscience, computers remain nothing more than direction-following machines, and an algorithm is just a series of instructions to them. Even the seemingly autonomous unmanned vehicles the Navy uses simply carry out a series of instructions.

The key is to have the right algorithms. Currently, programmers load a blank algorithm with known information, called “ground truth.” This “blank algorithm” approach is the basis of machine learning—teaching a machine by using ground-truth data. It learns and starts to draw conclusions about items, making correlations between things that are similar. This is familiar to anyone who has seen an ecommerce scrollbar saying, “People buying your item also bought . . . .” If someone buys a bicycle, they have a higher-than-average probability of needing a helmet, gloves, reflector vest, etc. These things are simple to predict. But an algorithm can also tell you when average ibuprofen and bandage consumption increases unpredictably, and not just that it has increased but that it did so by 17 percent. It will learn that a woman in her teens or 20s who is buying bigger clothes and lotions not normally associated with her personal customer profile has a higher probability of being pregnant.4
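As a toy illustration of what “learning correlations from ground truth” means here, the following sketch estimates co-purchase rates from a handful of invented transactions; the baskets and items are made up, and the method is deliberately naive.

```python
from collections import Counter

# Invented ground-truth transactions; each set is one customer's basket.
BASKETS = [
    {"bicycle", "helmet", "gloves"},
    {"bicycle", "helmet"},
    {"bicycle", "reflector vest"},
    {"ibuprofen", "bandages"},
]

def co_purchase_rates(baskets, item):
    """Estimate P(other item | item) from how often the two appear together."""
    with_item = [b for b in baskets if item in b]
    pair_counts = Counter(i for b in with_item for i in b if i != item)
    return {other: n / len(with_item) for other, n in pair_counts.items()}

# A buyer of a bicycle has a higher-than-average probability of needing a helmet.
print(co_purchase_rates(BASKETS, "bicycle"))
# e.g., {'helmet': 0.67, 'gloves': 0.33, 'reflector vest': 0.33}
```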

The more specific the problem-set to which an algorithm is applied, the higher probability it has of success. The trick is to make it as specific, and free of edge-cases (problems that occur only at extrema), as possible. When algorithms go wrong, it can have embarrassing consequences, as Google found out to its dismay when some people were tagged as gorillas in a facial-recognition algorithm, or possibly fatal consequences in a tactical situation such as a missed weapon detection.5 On the other hand, the more ground truth you include in an algorithm, the more complex the algorithm needs to be, requiring substantial processing to produce results.

Another challenge is distinguishing correct results from incorrect ones. This takes additional time and, in most cases, human involvement.

Training each algorithm on one highly specialized toolset specific to a problem requires a large investment but doesn’t produce useful results for a different problem. Every time the problem-sets change, analysts start over again using the same blank algorithm. Applying what was learned during the development of the previous algorithm can shorten the time to build the next one, but each starts with new ground truth. This solution is untenable.

“Human-machine teaming is the only thing that makes a practical solution possible.”

We envision a world of algorithms and microalgorithms that do one small thing, or a series of things, very well, working in concert to help arrive at an answer. There are programmers, researchers, and businesses developing millions of these algorithms; the algorithmic job of the future will be to determine which of the millions of algorithms should be applied to “my” problem. Algorithms take cues from people, and people take over where algorithms leave off, so humans will have to remain in the loop. Human-machine teaming is the only thing that makes a practical solution possible. This is the essence of the “Third Offset Strategy.” Its successful application will depend on combining humans and algorithms in a way that leads to better decisions.
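One way to picture algorithms and microalgorithms “working in concert” with a human in the loop is a simple pipeline in which each stage does one small thing and the final stage decides whether to hand off to an operator. The stages below are hypothetical and exist only to show the composition.

```python
from typing import Callable

# Each micro-algorithm does one small thing and passes its result along.
# Stage names and logic are invented, purely to illustrate composition.

def detect_motion(frame: dict) -> dict:
    frame["motion"] = frame.get("pixel_delta", 0) > 10
    return frame

def classify_object(frame: dict) -> dict:
    frame["object"] = "vessel" if frame.get("motion") else "none"
    return frame

def flag_for_review(frame: dict) -> dict:
    frame["needs_human"] = frame["object"] != "none"
    return frame

PIPELINE: list[Callable[[dict], dict]] = [detect_motion, classify_object, flag_for_review]

def run(frame: dict) -> dict:
    for stage in PIPELINE:
        frame = stage(frame)
    return frame

result = run({"pixel_delta": 42})
if result["needs_human"]:
    # Human takes over where the algorithms leave off.
    print("Hand off to operator:", result)
```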

Building the Surfboard

Human-machine teaming is fundamentally different from either designing a better man-machine interface (MMI) or using automation. Just as surfers could not have conquered monster waves without a tow-in and advanced board technology, warfighters need new approaches to overcome the big wave of data challenges. By fundamentally rethinking and redesigning the way warfighters work, the Navy can enable them to work seamlessly and collaboratively with technology. The good news is that the technology exists for a new approach to these vast, unstructured, data-heavy streams: turning real-time video into knowledge, for example.

Human operators make sense of their world by collecting data and information, comparing that data to what they already know, making inferences, and drawing conclusions so they can make decisions. Data and information come from many sources and can have different meanings depending on the context or frame of reference. For humans to make a decision when risk is involved, they need to extract the right information and determine its meaning with respect to the particular goal or outcome.  Do the risks outweigh the benefits?  What are some other options?  How can risks be mitigated? How much risk exists?  What information is missing? Should more data be collected? These questions usually play out quickly and unspoken in the minds of the decision makers.

A “friendly” vessel may not be so friendly if it is 440 meters long, weighs 220,000 tons, and is sitting where a submarine wants to surface.  It is obvious to the submarine’s crew that an ultra-large containership represents a real danger in this context.

Automating the derivation of knowledge from data with AI begins by identifying the problem or the issue to be resolved. The next step is to develop a structure for the data that underlies that knowledge. A structure that allows human knowledge and machine speed and efficiency to be combined is called an “ontology.” It does this by describing the relationships among the data, objects, actions, and other critical elements that drive decision-making. The very act of performing an ontological analysis clarifies the structure of that knowledge.
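A minimal sketch of what an ontology fragment for the surfacing scenario above might look like follows; the classes, relations, and facts are invented for illustration and are not drawn from any fielded system.

```python
# Hypothetical ontology fragment: classes, the relationships that drive a
# decision, and instance data expressed against that structure.

ONTOLOGY = {
    "classes": {
        "SurfaceContact": {"subclass_of": "Contact"},
        "Containership": {"subclass_of": "SurfaceContact"},
        "OwnShip": {"subclass_of": "Platform"},
    },
    "relations": {
        "obstructs": ("SurfaceContact", "Maneuver"),
        "intends": ("OwnShip", "Maneuver"),
    },
}

# Facts conforming to the structure above (all invented).
FACTS = [
    ("contact_17", "is_a", "Containership"),
    ("contact_17", "obstructs", "surface_maneuver"),
    ("own_ship", "intends", "surface_maneuver"),
]

def conflicts(facts):
    """Flag maneuvers our ship intends that some contact obstructs."""
    intended = {o for s, p, o in facts if s == "own_ship" and p == "intends"}
    return [(s, o) for s, p, o in facts if p == "obstructs" and o in intended]

print(conflicts(FACTS))  # [('contact_17', 'surface_maneuver')]
```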

As an example of this approach, sonar tactical displays can be analyzed ontologically as a way to detect and classify unknown contacts more quickly and accurately. By treating the tactical system’s outputs (e.g., sonar displays and audio signals) as algorithm inputs, we can take advantage of machine learning to analyze those outputs ontologically.

A sonar operator could highlight a particular acoustic feature present on the sonar display; the algorithm then would compare those features to the ground-truth library to generate a list of possible contacts (along with confidence levels) for presentation to the human. If additional features were to be highlighted, the list and confidence levels would be updated.  The operator would then use intuition and knowledge of the tactical situation to make decisions. This is different from the operator having to keep track of everything and make all the comparisons in his head. Dr. William D’Angelo at the Office of Naval Research is currently funding a project to compare operator performance under both conditions using archived sonar data.
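A simplified sketch of that feature-matching loop appears below, with an invented ground-truth library and a deliberately naive confidence measure (the fraction of highlighted features each candidate explains); real acoustic classification would use far richer features and statistics.

```python
# Invented ground-truth library: contact class -> acoustic features it exhibits.
LIBRARY = {
    "merchant": {"broadband", "single_screw", "low_rpm"},
    "trawler":  {"broadband", "machinery_tonal"},
    "warship":  {"narrowband_tonal", "twin_screw", "machinery_tonal"},
}

def rank_contacts(highlighted: set) -> list:
    """Return candidate contacts with a naive confidence: the fraction of
    highlighted features that match each library entry."""
    scores = []
    for contact, features in LIBRARY.items():
        matched = len(highlighted & features)
        scores.append((contact, matched / len(highlighted)))
    return sorted(scores, key=lambda x: x[1], reverse=True)

# Operator highlights one feature, then refines the list with a second one.
print(rank_contacts({"broadband"}))
print(rank_contacts({"broadband", "single_screw"}))
```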

Hanging Ten

The ontological approach has great advantages for both the operator and the system developer. With it, operators will make better decisions. Because this type of system is completely scalable, the ontological structure can handle a great number of different inputs and input types. An analyst could work from a single sensor or a combatant commander’s theater-level inputs, as when the Joint Inter-Agency Task Force searches the Gulf of Mexico for drug runners’ semisubmersible submarines. The approach also avoids the limitations of current data-mining techniques applied to traditional database structures.

In addition, analyzing the data in this way indexes the data according to the ontological structure. This allows the data to be searched, played back, and analyzed in ways previously not possible. The decision maker can search the stored data semantically—not merely play it back like an old video cassette—and perform in situ analyses that are not time-feasible in current systems. This creates the ability to use near-real-time data to modify tactics and increase tactical or strategic advantage. Think of a professional football team analyzing video data during the game to adjust and modify their play-calling—waiting for the next game or season to gain lessons learned and improve is not an option.
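To make the idea of semantic, time-bounded playback concrete, here is a sketch of a query against an invented index of ontologically tagged video segments; the fields and tags are hypothetical.

```python
from datetime import datetime

# Invented index: each record points back into the stored video by time span
# and carries the ontological tags assigned during analysis.
INDEX = [
    {"start": datetime(2024, 5, 1, 13, 0), "end": datetime(2024, 5, 1, 13, 3),
     "tags": {"surface_contact", "northbound"}},
    {"start": datetime(2024, 5, 1, 14, 10), "end": datetime(2024, 5, 1, 14, 12),
     "tags": {"surface_contact", "loitering"}},
]

def semantic_search(index, required_tags, since):
    """Return indexed segments carrying all required tags after a given time."""
    return [rec for rec in index
            if required_tags <= rec["tags"] and rec["start"] >= since]

hits = semantic_search(INDEX, {"surface_contact", "loitering"},
                       since=datetime(2024, 5, 1, 12, 0))
for rec in hits:
    print("Replay segment:", rec["start"], "->", rec["end"])
```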

Using an ontological framework offers new and creative ways to present the data. Decision makers can better engage with the data and information and gain new insights that reduce the time to make a decision. These combined advantages allow the warfighter to move from situational awareness to situational understanding.

For developers, such a knowledge-based system can be produced as a standalone box that requires only a connection to a video stream from a sensor or output from non-sensor data—e.g., intelligence messages. This separation reduces integration complexities and limits impact on current operations and manpower while it greatly increases capability.

One major problem with current approaches to building a good common operating picture is that data from different combat systems varies widely, not only among ship classes but even among different flights of the same class, as well as across the services. Attempts to use common data interfaces or structures have not solved the problem. The semantic framework of an ontology greatly strengthens system interoperability by allowing non-proprietary sharing of data among previously incompatible systems.

It also addresses the algorithm-reuse problem described earlier. The approach can be applied to different situations by changing the ontology structure when moving from one problem set to another. These advantages allow system architecture to evolve from bolted-together “systems of systems” to a truly integrated worldview.

Developing Situational Understanding

If the Navy does not address the data tsunami, the fleet will be engulfed by it. But by adopting the “algorithms + human cognition” approach, the service will go far beyond handling this burgeoning data problem. It will solve interoperability challenges that have befuddled acquisition and operational programs for years and, more importantly, will offer commanders an entirely new way of understanding their situations. As radar and sonar brought commanders situational awareness that exceeded what lookouts posted on mastheads could provide, ontological sensor analysis will bring about an understanding that will profoundly change the way fleets act and fight.


Gabe Harris is vice president of product management at Orions Systems, Inc., and a 20-year veteran of the U.S. Air Force. Ms. Lamb is a program manager at AECOM. She has worked as a mathematician and project manager at the Naval Undersea Warfare Center and is an adjunct professor at the University of New Haven. Dr. Lamb is a senior lead engineer for Booz Allen Hamilton in Newport, Rhode Island, and was the technical director of the Naval Submarine Medical Research Laboratory from 2002 to 2016.



  1. Jesse Sloman and Sean Cate, “Operating Under Constant Surveillance,” U.S. Naval Institute Proceedings, vol. 142/5/1,359 (May 2016).
  2. “SkySat 3, 15,” Gunter’s Space Page, http://space.skyrocket.de/doc_sdat/skysat-3.htm.
  3. Wired UK, http://www.wired.co.uk/article/ibm-watson-ai-film-trailer.
  4. Kashmir Hill, “How Target Figured Out a Teen Girl Was Pregnant Before Her Father Did,” Forbes, 16 February 2012, http://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/#58bbede634c6.
  5. “Google Apologizes After Photos Identify Black People as ‘Gorillas’,” USA Today, 1 July 2015, http://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-black-people-as-gorillas/29567465/.