
Tags, AI, and dimensions


Remember tags? Around 2007, I was all over them, and I feel no shame about that. Well, not much shame. [David’s book, Everything Is Miscellaneous, was published in 2007. —Ed.] Letting users apply whatever tags they wanted to digital content, creating folksonomies, blew apart constraints that we'd assumed for millennia were strengths of knowledge. In fact, the idea that each thing had only one “real” tag was the bedrock of knowledge for thousands of years: A tomato is a vegetable, not some other thing.

OK, nerds, you’re right; a tomato is actually a berry. But you’re just proving my point: We like to think that a thing is one thing and not anything else. At least in some contexts.

Of course, before tags, we would apply multiple classifications to things: A book about tomatoes might get classified under recipes, healthy foods, and the genus Solanum. But a tomato is also a classically red object, roundish, delicious, squishy, a source of juice, a bad thing to learn juggling with, something we used to throw at bad actors and corrupt politicians, and so much more.

Then, with sites that allowed user-based tagging, users could tag tomatoes with whatever attributes were important to them at that time. We can now do this with the photos we take, the places we go on our maps, the applications we use, the sites we visit, the music we listen to. Tags have become so common that they’ve faded from consciousness since 2007, although sometimes a clever hashtag pops up.

AI in the form of machine learning can automatically apply tags, but it may also reduce the need for them. Already we can search for photos based on their content, colors, or even their mood, all without anyone attaching tags to them.

Machine learning redefines tagging

But more may be at stake. Might machine learning complete the conceptual job that tagging began, leading us from a definitional understanding of what things are to a highly relational view? My prediction (My motto: “Someday I’ll get one right!”) is that within the next few years, “dimensionality” is going to become an important, everyday word.

One view of meaning is that a word is what its definition says it is, as if a definition were the long way of saying what the word says more compactly. But that’s not how we use or hear words. In The Empire Strikes Back, when Princess Leia says, “I love you” to Han Solo and he replies, “I know,” the definitions of those words completely miss what just transpired.

Tagging has made clear that things have very different meanings in different contexts and to different people. Definitions have their uses, but the times when you need a dictionary are the exception. Tags make explicit that what a thing is (or means) is dependent on context and intention.

Machine learning is getting us further accustomed to this idea, and not just for words. For example, a medical diagnostic machine learning model may have been trained on health records containing a wide variety of data, such as a patient's heart rate, blood pressure, weight, age, cholesterol level, current medications, medical history, location, diet, and so forth. The more factors, the more dimensions.
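The idea can be sketched in a few lines of code: each factor in a record becomes one dimension of a feature vector, so adding a factor adds a dimension. This is a minimal illustration, not a real diagnostic model; the field names and values below are hypothetical.

```python
# A hypothetical patient record: every measured factor is one
# dimension in the space the model learns from.
patient = {
    "heart_rate": 72,       # beats per minute
    "systolic_bp": 118,     # mm Hg
    "weight_kg": 70.5,
    "age": 54,
    "cholesterol": 190,     # mg/dL
}

# Flatten the record into a feature vector. The vector's length
# is the dimensionality: more factors, more dimensions.
features = list(patient.values())
dimensions = len(features)
print(dimensions)
```

Add a new field, say diet or location (suitably encoded as a number), and the vector grows by one dimension; real models routinely work in hundreds or thousands of them.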
