
My teammate the bot—really?

In a few short years, the workplace is going to be filled with new characters. We have characters now, of course: the endlessly patient IT support person; the egomaniacal, me-first corporate climber; the supportive counterpart across the cubicle wall; the hired gun from the consultancy; the not-my-job killjoy just assigned to the team. But what if all those types are rolled into one and called our “digital assistant”? Or what if all those familiar human characters fade away, replaced by a virtual silicon resource whose ways are unfamiliar and whose language often seems different from our own?

Prepare for the arrival of devices brought in to “augment” our capabilities. They will come slowly at first, usually focused on a sliver of a task, like checking out crash photos in the auto department of a property/casualty insurance business. They’ll weed out the total losses from the fixable cases at least 10x faster than the sharpest analysts. And that will be good for the analysts, because they can save their eyes and focus on identifying questionable “fringe” cases or troubleshooting pricing issues with the repair shops. But as time goes by, the augmentation devices will take over more and more of the claims process, many analysts will leave, and the ones who remain will be doing different kinds of essentially human tasks, the ones where “only humans need apply,” as Tom Davenport has put it.

In a recent PwC study of 500 executives, 78 percent of respondents said they “would work with an AI manager if it meant a more balanced workload.” (Not much of a commentary on today’s human managers.) Significantly fewer, 50 percent, “would follow an AI system if it predicted the most efficient way to manage a project.” But whatever the level of acceptance of the machine in these early days, the handwriting is on the wall: We will soon be asked to lead, to partner with, and even to follow software programs in ways we have never needed to confront before.

The issue of trust

For any business looking to implement AI-driven applications, a first concern has to be answering the question: “How do our people adapt to working with a computer as a team member?”

The literature on teaming in business is replete with suggestions: about the value of collaboration; about understanding the dynamics of building cross-functional teams; and about how to “form, storm, norm, perform” in your business context and across your key workflows. But the many psychologically oriented guidelines boil down, in the end, to the importance of establishing working relationships based on trust.

How are the development of teams and the establishment of trust going to work in the era of the digital assistant or the digital manager? It can be useful to look at the checkered history of how the issue of trust has evolved with the growing presence of computers in our lives.

In the really early days, back in the 1950s and 1960s, there was an initial flush of belief that if a report came out of a computer, it must be correct and accurate. After all, there were no human accountants or other number crunchers on hand to make the mistakes!

But that initial flush didn’t last too long, because people began to pick up on transactional errors in financial reports, misspellings of names and addresses in mail and other identity-oriented contexts, and an inability to get those machine-delivered errors corrected in a timely manner. In short, people discovered that there could be bugs and biases in software, that there could be low-paid clerical typists behind the scenes goofing up names and addresses after all, and that maintenance of programs was always deferred in favor of moving on to other things.
