Who should do security clearance checks? Did you say 'chat-bot'? This military slinger hopes so

Study finds emotionless code likely to extract dark secrets

A study linked to a military IT contractor has backed the use of chat-bots for screening US government security clearance applicants.

The fresh research, conducted by Uncle Sam's National Center for Credibility Assessment and military contractor ManTech International, found that subjects were more forthcoming when speaking to an automated chat program than when filling out standard-issue questionnaires.

The credibility assessment center's mission is to teach g-men the skills needed for "psychophysiological detection of deception." ManTech, meanwhile, does everything from multimillion-dollar maintenance jobs on US Navy ships to providing software and IT to intelligence agencies.

For the study, 120 US Army trainees were, one by one, put in front of a computer-generated avatar that quizzed them about their private lives. Beforehand, the subjects were asked to fill out pencil'n'paper questionnaires about their lifestyles.

It emerged that people were more forthcoming about alcohol abuse, drug use, psychological treatment, and other sensitive topics when talking to the avatar than when filling out the paper forms.

As a result, the researchers believe a computer agent could in some way replace pencil and paper questionnaires. Of course, ManTech would be just the corporation to supply such software, cough.

"Automating this process using a [computer graphics] interview format could save time, and allow agencies to utilize their human interviewers more effectively," Dean Pollina, of the National Center for Credibility Assessment, and Allison Barretta, of Mantech, claimed in a write-up of their study.

"Automation would also facilitate standardization of the interview questions and procedures, resulting in more objective and equitable hiring decisions for the applicants."

In their report, published in the October issue of the journal Computers in Human Behavior, the duo describe how skin sensors were hooked up to subjects, as well as the custom chat code used.

The agent itself was not a sophisticated AI; it instead relied on a scripted speech tree that branched to follow-up questions based on the answers received. The bot's on-screen face was designed to be ethnically ambiguous and did not display emotion – though curiously, a quarter of interviewees claimed to have seen an emotional response from the avatar.
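
The branching design is easy to picture: each scripted node holds a question, and the answer given selects the next node until the script runs out. Below is a minimal sketch in Python; the node structure, answer matching, and example questions are our own illustration, not the study's actual code:

```python
# Minimal sketch of a scripted, branching interview tree. The node
# structure, answer matching, and example questions are illustrative
# assumptions, not the script used in the study.
from dataclasses import dataclass, field

@dataclass
class Node:
    question: str
    # Maps a normalized answer (e.g. "yes"/"no") to a follow-up node.
    branches: dict = field(default_factory=dict)

def run_interview(node: Node) -> list[tuple[str, str]]:
    """Walk the tree, recording (question, answer) pairs."""
    transcript = []
    while node is not None:
        answer = input(node.question + " ").strip().lower()
        transcript.append((node.question, answer))
        node = node.branches.get(answer)  # no matching branch ends the interview
    return transcript

# Hypothetical fragment of a screening script:
followup = Node("How many drinks do you have in a typical week?")
root = Node("Do you drink alcohol?", branches={"yes": followup})

if __name__ == "__main__":
    print(run_interview(root))
```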

Not only were the subjects more open to the software – which ran on an Apple Mac Mini, for what it's worth – but they also tended to be more comfortable with the agent: roughly half said they preferred the computer-graphics interview, and a quarter said they had no preference.

"We do believe that if researchers are successful at transitioning CG credibility assessment interviews such as the one tested here to applied settings, their use might mitigate some of the gender and cultural biases that could exist when humans conduct these security interviews," the pair wrote.

"From a technological perspective, automated interviews such as the one we utilized here might lead to additional research into the development of more sophisticated CG interview strategies, in part through the use of CG agents with different physical characteristics or culture-specific utterances."

The researchers noted, however, that the study doesn't go so far as to prove that chat-bot agents can replace humans for screening interviews. The questions used in the study were not the same as those used in standard polygraph tests, and the findings would need further validation with larger sample sizes and scoring algorithms.

The study does suggest, much to ManTech's delight no doubt, that the screening process could at least be partially automated and used to supplement traditional human interviews and analysis. Perhaps rather than filling out a questionnaire in the lobby, you could soon find yourself speaking with an avatar. ®
