December 14, 2021
Staff Accomplishment

Team Examines Human-Machine Teaming in Law Enforcement

Working to bridge the gap between today’s tools and machine teammates of the future

Corey Fallon, Kris Cook, and Grant Tietje

Corey Fallon, Kris Cook, and Grant Tietje of PNNL examine how scientists and engineers are working to take existing capabilities and turn them into true machine teammates.

(Photos by Andrea Starr | Pacific Northwest National Laboratory)

Human-machine teaming may sound like something from the distant future, but many technological capabilities already exist. In “Human-Machine Teaming: A Vision of Future Law Enforcement” in Domestic Preparedness, Corey Fallon, Kris Cook, and Grant Tietje of Pacific Northwest National Laboratory (PNNL) examine how scientists and engineers are working to take these capabilities and turn them into true machine teammates.

Defining a Teammate

There is more to being a true teammate than just responding to requests from a human. True human-machine teaming enhances team performance by minimizing the work the human must do to manage the machine.

Many existing artificial intelligence (AI) tools have one or two helpful capabilities, but this is not enough to function as a teammate. As the team states in the article, autonomous systems like drones and self-driving cars are useful, but on their own these systems are not teammates. They require a human to regularly monitor their activity to make sure they are functioning properly.

“Human-machine teaming is taking the benefits that AI has to offer and pairing these with human skills like creativity and adaptability,” said Fallon. “Rather than blindly performing tasks, a good machine teammate will learn and work proactively to support their human teammate.”

In their article, the PNNL team wanted to illustrate how a machine teammate could enhance law enforcement in the near future.

“One of our goals was to paint a clear picture of what human-machine teaming is to the first responder community. We wanted to show that human-machine teaming doesn’t mean automation is being developed to replace humans, but rather it is a partnership that maximizes the benefits of both AI and humans,” said Fallon. “We think a lot about what it’s like to work with a human teammate and what a human teammate brings to a team to help it perform better. Successful human teammates need to have the ability to observe, communicate with their teammate, and act independently. We believe observe, communicate, and act are also the key dimensions of a successful machine teammate.”

Observe, Communicate, Act

The key capabilities that define a true teammate are the ability to observe, communicate, and act.

“You need the AI to observe its surroundings, its teammate, and its own performance; but it’s not good enough to just observe. The AI also has to be able to communicate to the human teammate what it is observing,” said Fallon. “The machine teammate also needs the capability to behave proactively so it can act on what it observes without waiting for human instructions.”

Without all three of these pieces, the teammate formula won't work. For example, if a human has a machine teammate that can only observe and act, the human may lose track of what it's doing. That's where communication becomes really important. A machine teammate that keeps the human updated helps the team maintain common ground.

“PNNL is on the cutting edge of developing AI for the federal government. We're developing AI and automated systems that are able to perform tasks that were, up until recently, squarely in the domain of the human. Now we’re in a situation where these systems can truly start to interact with us like teammates,” said Fallon. “As we move into developing systems that have greater capabilities, we want to leverage these in the right way to improve task performance. Research in human-machine teaming provides a clear vision to guide the development and design of PNNL’s technological advancements in automation and AI.”