Video as Data: What It Means


Treating video as data is a core principle – maybe the most important core principle – in the LenelS2 video program

By John L. Moss

John L. Moss is Chief Product Officer of LenelS2.


Treating video as data is a core principle – maybe the most important core principle – in the LenelS2 video program. It represents a revolution in security video. But, what does it really mean?

Certainly “video as data” speaks to the form in which visual content is acquired, transported and stored. Large-scale, commercially popular digital video came to market in the early 2000s. It was hampered at the time by the slowness of available data rates (when 56 kbps dial-up was the norm) and the large size of digital video streams (think MJPEG). Twenty years later, digital video leverages vastly faster networks, processors and storage arrays to deliver real-time imagery that is higher in quality than the best that analog video produced in its day.

Video as data begins with digitized visual content and takes it to the next level by adding two important characteristics of data: the ability to infer meaning through analysis, and the ability to forensically search and retrieve using database-style queries. These benefits are achieved by storing analytic results in a metadata database that parallels the raw video data. The bonding of digitized visual content and descriptive metadata is the essence of “video as data,” and we call that combination an “enriched video stream.”
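The idea of a metadata catalog paralleling the raw video can be sketched in a few lines. This is an illustrative toy, not the actual LenelS2 schema: the table, column names and event labels are assumptions, and the timestamps stand in for keys back into an unchanged raw video store.

```python
import sqlite3

# Hypothetical sketch: a metadata table that parallels the raw video store.
# Column names and event labels are illustrative, not the LenelS2 schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE video_metadata (
        camera_id   TEXT,
        frame_ts    REAL,     -- seconds; keys back into the raw video
        object_type TEXT,     -- e.g. 'person', 'vehicle'
        event       TEXT      -- e.g. 'motion', 'line_crossing', 'loitering'
    )
""")

# Analytic results arrive alongside the video and are indexed as ordinary rows.
rows = [
    ("cam-lobby", 1000.0, "person",  "motion"),
    ("cam-lobby", 1012.5, "person",  "line_crossing"),
    ("cam-dock",  1020.0, "vehicle", "motion"),
]
conn.executemany("INSERT INTO video_metadata VALUES (?, ?, ?, ?)", rows)

# A forensic, database-style query: when did a person cross the line?
hits = conn.execute(
    "SELECT camera_id, frame_ts FROM video_metadata "
    "WHERE object_type = 'person' AND event = 'line_crossing'"
).fetchall()
print(hits)  # [('cam-lobby', 1012.5)]
```

The returned timestamps are what let a playback system jump straight to the matching moments in the raw footage, which is the practical payoff of pairing video with queryable metadata.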

Video content originates at a camera, but metadata originates wherever it is calculated – in the camera, on a server or at a client. Often, video analytics are calculated by the firmware in the cameras themselves. Most camera manufacturers whose products are used with LenelS2 systems can perform in-camera analysis of some type. Each frame of video is analyzed on the camera’s processor, and the resultant metadata is forwarded to the recorder. The complexity of these analytics ranges from straightforward motion detection to more complex behavioral detection, such as loitering or line-crossing.

LenelS2’s Magic Monitor product adds the ability to display video analytic results computed outside of the camera for any video source, taking advantage of GPUs, fast CPUs and neural network software. Newer versions of our VRx product offer a similar capability in the server, letting you generate analytic metadata even when video arrives from the camera without any, thus making any video source “smart.”

Magic Monitor offers a capability called SmartCell that performs object analytics on raw video prior to display, letting users build cells that are content aware. This works for any camera from any video source that Magic Monitor accepts.

The objective of generating and cataloging video analytic metadata is twofold: to alert on it as it happens, and to search on its characteristics after the fact. The Magic Monitor Forensics capability supports the enriched video streams that come from VRx and tags the timeline accordingly.

Digital video, once recorded, is unchangeable. It has to be that way in order to be forensically accurate. Metadata, on the other hand, can be augmented when analyses are subsequently performed. We’re working on support for post hoc analytics that will let users perform analytics on selected video after it’s recorded.

We’ve designed the metadata database to be extended as new analytics become available, so eventually users with enriched video will be able to search using analyses that didn’t even exist at the time the video was recorded.
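One way to picture an extensible metadata catalog is a tagged-row design, where each analytic writes its own labeled rows. The sketch below is a hypothetical illustration, not the actual LenelS2 design: it shows a later-added analytic annotating video recorded before that analytic existed, while the raw video itself stays untouched.

```python
import sqlite3

# Hypothetical sketch of an extensible metadata catalog: each analytic
# writes tagged rows, so an analytic added later can annotate video that
# was recorded before the analytic existed. Not the actual LenelS2 schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE analytic_tags (
        frame_ts REAL,   -- keys into the unchanged raw video
        analytic TEXT,   -- which analysis produced this tag
        value    TEXT
    )
""")

# Metadata written at record time by the original analytic.
conn.execute("INSERT INTO analytic_tags VALUES (100.0, 'motion', 'detected')")

# A post hoc analytic, run later over the same recorded video, appends
# new tags without modifying the immutable video.
conn.execute(
    "INSERT INTO analytic_tags VALUES (100.0, 'object_class', 'vehicle')"
)

# Old footage can now be searched by an analytic that did not exist
# when the video was recorded.
hits = conn.execute(
    "SELECT frame_ts FROM analytic_tags "
    "WHERE analytic = 'object_class' AND value = 'vehicle'"
).fetchall()
print(hits)  # [(100.0,)]
```

Because new analytics only ever add rows, the catalog can grow indefinitely while every recording, old or new, remains searchable by the full set of analyses.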

We are in the earliest phases of video as data, and the future offers some very exciting possibilities.

Read more from this issue of Connect Magazine