Orions Systems | IBM Watson’s Trailer Great Example of Algorithms + Human Cognition Approach to Video Intelligence
IBM Watson’s Trailer Great Example of Algorithms + Human Cognition Approach to Video Intelligence


The entertainment and AI worlds collided last week when the first AI-created movie trailer hit the internet. Aptly, the trailer was created for Morgan, a movie about an artificially created human.

The AI was supplied by IBM’s Watson platform, but much of the creativity behind the trailer, such as choosing the scenes Watson learned from, editing the selected footage, and arranging it with music, still came from humans. It’s a good example of the “Algorithms + Human Cognition” approach that we use at Orions Systems to develop Smart Vision solutions for video content analysis.

Breaking down the workflow, we can pinpoint the hand-offs between Watson and the team of creatives, and see how algorithms combined with people power can change and scale the way we analyze video content, and even how we create it.

Step 1: People train the algorithm

IBM’s team of researchers had to train the system on what type of scenes typically show up in a horror-movie trailer. They did this by selecting 100 trailers and then creating video segments that Watson could analyze.
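The step above boils down to turning a corpus of trailers into analyzable segments. A minimal sketch of that preparation, assuming fixed-length windows for simplicity (IBM's researchers segmented their 100 trailers by scene, and the `Segment` structure here is purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    trailer_id: str
    start_s: float  # segment start, in seconds
    end_s: float    # segment end, in seconds

def segment_trailer(trailer_id: str, duration_s: float,
                    window_s: float = 5.0) -> list[Segment]:
    """Split one trailer into fixed-length segments for analysis.

    A simplification: real scene segmentation would cut on shot
    boundaries, not a fixed clock window.
    """
    segments = []
    t = 0.0
    while t < duration_s:
        segments.append(Segment(trailer_id, t, min(t + window_s, duration_s)))
        t += window_s
    return segments

# Build a toy training corpus from two hypothetical trailers.
corpus = []
for tid, dur in [("trailer_001", 92.0), ("trailer_002", 117.5)]:
    corpus.extend(segment_trailer(tid, dur))

print(len(corpus))  # total segments across the toy corpus
```

Each resulting segment would then be fed to the analysis models in the next step.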

Step 2: Watson runs the algorithms

In this case, a number of algorithms were used to detect people, objects, and scenery, and each scene was tagged with one of 24 potential emotion labels. Audio analysis added further data about each scene, helping to define it as eerie, frightening, or touching, and location, lighting, and framing were analyzed to get an idea of what types of scenes end up in scary-movie trailers.
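The tagging described above can be pictured as picking the strongest emotion signal per scene. A hedged sketch, where the per-label scores stand in for the outputs of Watson's visual and audio models (the label subset and scores are invented for illustration, not IBM's actual taxonomy):

```python
# Watson used 24 emotion labels; only four are shown here.
EMOTIONS = ["eerie", "frightening", "touching", "tense"]

def tag_scene(scores: dict[str, float]) -> str:
    """Return the highest-scoring emotion label for one scene.

    In a real system each score would come from trained visual/audio
    classifiers; here they are supplied directly.
    """
    return max(scores, key=scores.get)

scene_scores = {"eerie": 0.62, "frightening": 0.81,
                "touching": 0.05, "tense": 0.44}
print(tag_scene(scene_scores))  # frightening
```

The chosen label then travels with the scene as metadata, which is what makes the later search-and-select step possible.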

Step 3: Feed in the full movie video

Now that Watson had been trained on what to look for, it could search through the entire movie to find scenes that would fit into a typical horror-movie trailer. Watson selected ten scenes based on the inputs and tagging.
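This selection step is essentially a top-k ranking over the full movie's scenes. A minimal sketch, assuming each scene has been reduced to a single "trailer-likeness" score (Watson's actual selection combined visual, audio, and compositional signals, so the scalar score here is a hypothetical stand-in):

```python
def select_candidates(scene_scores: list[tuple[str, float]],
                      k: int = 10) -> list[str]:
    """Rank scenes by score and keep the top k candidates."""
    ranked = sorted(scene_scores, key=lambda s: s[1], reverse=True)
    return [scene_id for scene_id, _ in ranked[:k]]

# Twelve hypothetical scenes from the full movie, with invented scores.
scores = [(f"scene_{i:03d}", score) for i, score in enumerate(
    [0.2, 0.9, 0.4, 0.75, 0.1, 0.95, 0.6, 0.3, 0.8, 0.55, 0.7, 0.05])]

shortlist = select_candidates(scores, k=10)
print(shortlist[0])  # scene_005, the highest-scoring scene
```

The resulting shortlist of ten scenes is what gets handed to the human editor in the final step.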

Step 4: Create the final trailer

The ten selected scenes, now organized into a readily accessible data subset, were handed off to a human editor, who made the final selection, arranged the scenes, and added music and title cards.

“It’s important to note that there is no ‘ground truth’ with creative projects like this one. Neither our team nor the Fox team knew exactly what we were looking for before we started the process.” John Smith, IBM Fellow, Manager of Multimedia and Vision

IBM says the entire process took about 24 hours, cutting weeks off the typical trailer-creation schedule. And this is where the combination of machine learning and human cognition pays off: scale. Algorithms are great at analyzing video and image content very quickly, but they need to be trained by humans on what to look for, and when the stakes are high, humans need to play a role in the final analysis.

“The combination of machine intelligence and human expertise is a powerful one. This research investigation is simply the first of many into what we hope will be a promising area of machine and human creativity. We don’t have the only solution for this challenge, but we’re excited about pushing the possibilities of how AI can augment the expertise and creativity of individuals.” John Smith

Orions Systems creates smart vision solutions that combine algorithms and human cognition to bring new levels of video content intelligence to a range of industries, from sports to the military. For more information on how smart vision systems are created contact us at info@Orionssystems.com.