
Monetizing digital asset management: The power of metadata management


Unsupervised learning

Unsupervised learning techniques are appropriate for certain metadata classifications that inform data cataloging. A benefit of this variety of machine learning is that it requires less annotated training data than supervised learning does. Certain unsupervised learning algorithms specialize in detecting similarities between data elements and their supporting metadata, so that “when you find alike types of information, you can actually use it to find similarities,” Swamy Viswanathan, EVP and CPO at ASG Technologies, commented. “That’s how you can actually classify things together.”

Clustering is another unsupervised learning technique useful for classifying data elements. With these techniques, the models still learn from humans, who must review and approve their results.
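To make the clustering idea concrete, here is a minimal sketch that groups a handful of asset descriptions by textual similarity and surfaces the proposed groupings for human approval. The language (Python), library (scikit-learn), and sample data are assumptions chosen purely for illustration, not a depiction of any vendor’s implementation.

```python
# Minimal sketch: cluster asset descriptions by textual similarity so a
# human reviewer can approve (or reject) suggested metadata groupings.
# scikit-learn and the sample descriptions are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

asset_descriptions = [
    "Spring campaign hero image, red logo on white background",
    "Product photo: sneakers, studio lighting, white background",
    "Holiday video ad, 30 seconds, in-store footage",
    "Winter campaign banner, blue logo, snow scene",
    "Product photo: leather handbag, studio lighting",
]

# Represent each description as a TF-IDF vector, then cluster similar ones.
vectors = TfidfVectorizer(stop_words="english").fit_transform(asset_descriptions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Surface the proposed clusters for human review before any tags are applied.
for cluster_id in sorted(set(labels)):
    print(f"Proposed cluster {cluster_id}:")
    for description, label in zip(asset_descriptions, labels):
        if label == cluster_id:
            print(f"  - {description}")
```

The point of the sketch is the workflow, not the algorithm choice: the model proposes groupings from unlabeled data, and people confirm them before they inform the catalog.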

Supervised learning

Machine learning’s automation is pivotal for reducing the time and costs of tagging metadata. It’s considered one of the greatest boons for digital content management. “Artificial intelligence can auto-tag digital assets such as images, videos, and documents with high confidence to identify subjects and other important attributes such as products, people, objects, emotions, locations, colors, actions, and scenes,” said Sedegah. This degree of specificity is largely provided by the labeled training data of supervised learning.

“When you want to start labeling things and say, ‘Find me things with exactly this sort of a characteristic,’ that’s actually based on tagging,” Viswanathan explained. “At that point, humans have to define metadata, based on which, the algorithm will then learn.” Metadata is imperative for tagging the many points of variation in content for cataloging. Those tags become the foundation for how quickly organizations can intelligently traverse their digital assets to determine their applicability for considerations such as usage rights, expiration dates, or placements.
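As a hedged illustration of that supervised workflow, the sketch below learns tags from a few human-labeled captions and then suggests tags for new assets. The captions, tag names, and scikit-learn pipeline are invented for this example and do not represent any particular product.

```python
# Minimal sketch: learn to auto-tag assets from human-labeled examples.
# The captions, tags, and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Humans define the metadata first; the algorithm then learns from those labels.
training_captions = [
    "Model smiling, holding shopping bags in Times Square",
    "Close-up of red sneakers on a wooden floor",
    "Family opening presents beside a decorated tree",
    "Runner tying the laces of blue sneakers at sunrise",
]
training_tags = ["people", "product", "people", "product"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(training_captions, training_tags)

# Suggested tags for new assets, ready for review before cataloging.
new_captions = ["Two friends laughing outside a storefront",
                "Studio shot of a leather handbag"]
for caption, tag in zip(new_captions, classifier.predict(new_captions)):
    print(f"{tag:8} <- {caption}")
```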

Taxonomies

The diversity of content relevant to digital assets, which increasingly includes videos and images alongside documents, requires machine learning-based AI capabilities such as image recognition and computer vision. But regardless of whether the content involves images or text, the specific words used to describe that content are of the utmost importance for tagging metadata. It is vital to standardize the terms used to describe such metadata (and content) with taxonomies, so organizations have a consistent, organization-wide means of ascertaining “the type of products people talk about and the type of use cases that they talk about,” Aasman said.

Thus, when analyzing content effectiveness via analytics, users can rely on these hierarchies of business glossary terms for identifiable metrics. Aasman also said that, for DAM, it’s crucial to deploy uniform vocabularies not only for the content’s metadata but also for the content itself, so that “you can derive information from the assets by using NLP [natural language processing] or machine learning or visual recognition.” Metadata may provide the launching point for employing taxonomies for cataloging, but, to be thorough, digital content users must also mine the content itself for the most detailed descriptions of those assets.
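A minimal sketch of that standardization step appears below: free-text tags are normalized against a small controlled vocabulary so that analytics and search operate on consistent terms. The hierarchy, synonyms, and sample tags are invented for illustration.

```python
# Minimal sketch: normalize free-text tags against a small controlled
# vocabulary so analytics and search use consistent terms.
# The taxonomy, synonyms, and tags below are invented for illustration.

# Preferred term -> (parent term in the hierarchy, accepted synonyms)
TAXONOMY = {
    "footwear": ("apparel", {"shoes", "sneakers", "trainers"}),
    "handbag":  ("accessories", {"purse", "bag"}),
    "campaign": ("marketing", {"promo", "promotion"}),
}

def normalize(tag: str) -> str | None:
    """Return the preferred taxonomy term for a raw tag, if any."""
    tag = tag.lower().strip()
    for preferred, (_parent, synonyms) in TAXONOMY.items():
        if tag == preferred or tag in synonyms:
            return preferred
    return None  # Unmapped tags are flagged for a taxonomist to review.

raw_tags = ["Sneakers", "promo", "purse", "skyline"]
for raw in raw_tags:
    print(f"{raw!r:12} -> {normalize(raw)}")
```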

Natural language

Natural language processing positively affects digital content management in a number of ways, including the following:

Entity extraction: Natural language processing is instrumental in parsing content and extracting germane entities from it for unequivocal understanding of that content, beyond its metadata (see the sketch at the end of this section). In some instances, this technology is paired with image recognition systems to know where to find the meaningful entities in the content. This process involves entity extractors and classifiers “that you would have to train to recognize the [desired content],” Viswanathan explained.

Speech: Natural language processing and machine learning are cornerstones of speech recognition systems. Competitive digital content management vendors can use speech-to-text capabilities and natural language processing to power virtual assistants that recognize what is being asked and retrieve what is required from the DAM, Schweer revealed.

Search: Search is the most profound outcome of automatically tagging content with cognitive computing and an example of operationalizing metadata. “For DAM to be usable, you need any given user, from the intern to the CEO, to be able to log into the DAM and easily find the logo asset that they need or figure out what last year’s spring campaign for M&M’s looked like,” O’Brien said. “Having that metadata foundation for the search and the filtering, and to drive all of the important processes that the DAM is performing, is really important.”

These natural language capabilities are particularly influential for achieving these advantages at scale, which could easily encompass “thousands or millions of images, videos, or documents spread among a large organization,” Sedegah said. The time savings of this approach are magnified when it is paired with other cognitive computing technologies supporting visual search, which can enable end users to surface “images similar to a specific asset within seconds, such as a person with shopping bags in Times Square,” Sedegah noted. Verbal search capabilities are also emerging.
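As a rough sketch of the entity extraction step referenced above, the example below uses spaCy and its off-the-shelf small English model (an assumption; the article does not name a toolkit) to pull people, places, and organizations out of an asset’s accompanying text so they can be written back as metadata tags.

```python
# Minimal sketch: extract entities (people, places, brands) from an asset's
# accompanying text to enrich its metadata. spaCy and its small English model
# are assumptions here; the article does not name a specific toolkit.
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

asset_text = ("Behind-the-scenes footage from last year's spring campaign "
              "shoot for M&M's in Times Square, New York.")

doc = nlp(asset_text)
for entity in doc.ents:
    # entity.label_ is the entity type (e.g., ORG, GPE); entity.text is the span.
    print(f"{entity.label_:8} {entity.text}")

# Extracted entities can be written back to the DAM as additional tags,
# making assets findable through the search and filtering described above.
```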
