US politicos wake up to danger of black-box algorithms shaping all corners of American life

Transparency needed, from privacy to net neutrality

In Washington, DC, on Wednesday, academics and policy wonks warned US Congressional representatives about the perils of inscrutable algorithms, a concern entangled with related worries about privacy, data collection, and net neutrality.

As framed in a letter submitted by the Electronic Privacy Information Center, democracy depends on fairness and transparency, but decisions that affect people's lives have become increasingly automated and opaque.

"It is becoming increasingly clear that Congress must regulate AI to ensure accountability and transparency," EPIC's letter said, with AI here being best understood simply as computer code.

A handful of reps asked for the lecture in a hearing titled "Algorithms: How Companies’ Decisions About Data and Content Impact Consumers," held under the auspices of subcommittees of the House Energy and Commerce Committee.

The concern, as explained by Rep. Greg Walden (R-OR), is that businesses may be sacrificing privacy, security, and fair treatment by relying on inscrutable code to govern interactions and transactions.

"Consumers should remain as safe from unfair, deceptive, and malicious practices by online firms and their algorithms on the internet as they do in the real world," he said in prepared remarks, even as he made it clear he wasn't eager to throw a wrench in the tech job creation engine.

A piece of Pai

Walden also took a moment to condemn the harassment of FCC chairman Ajit Pai and his family arising from Pai's plan to gut net neutrality. He made it clear he considered platform companies like Google, with opaque, questionable content blocking policies, to be the real threat to the net.

The recent hack of Equifax figured prominently, with reps expressing concern that the credit biz and its ilk operate without much regulatory oversight, perhaps forgetting what they were elected to do.

Such companies, said Rep. Mike Doyle (D-PA), "are increasingly using big data and machine learning to make judgements about individuals and their ability to access and use credit."

But crafting legislation to govern the interaction of people, code, and data won't be easy because the interplay can be deceptively complicated, to say nothing of the pushback from companies and their lobbyists eager to avoid burdensome laws.

Catherine Tucker, professor of management science and marketing at the MIT Sloan School of Management, observed that algorithmic bias may be a function of market forces – something given much deference in a capitalist system – rather than more troubling factors like deliberate discrimination.

In her statement, Tucker recounted recent research in which she and Anja Lambrecht of the London Business School found that online ads promoting careers in science, technology, engineering, and math (STEM) on Facebook, Google, and Twitter were shown 20 to 40 per cent more frequently to men than to women.

The cause turned out not to be that men use these internet sites more than women, or that women choose not to click on these ads as often as men. Nor was the cause traceable to cultural bias.

Advertisers

Rather, women saw these ads less often than men because other online advertisers were willing to pay more in the ad auction to reach women with different ads, thereby making it more expensive for the STEM ads to reach women than men.

"The algorithm is designed to minimize costs, so shows the ad to fewer expensive women than relatively cheaper men," Tucker explained.

But as University of Maryland law professor Frank Pasquale observed, claims of bias are very difficult to adjudicate without access to the underlying code and data. In his remarks, he cited last year's White House Report on Big Data, which noted that analytics has "the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace."

As an example, Pasquale pointed to ProPublica's exposé of how the Facebook ad system – the internals of which are not public – allowed the placement of discriminatory housing ads, which the social media company pledged to fix, but has yet to do so.

Pasquale, among others, has been calling for greater algorithmic oversight for the better part of a decade.

Remarks by Laura Moy, deputy director of the Center on Privacy & Technology at Georgetown Law, suggest it may be a while yet before effective rules are put in place, because lawmakers appear intent on undoing existing ones. She pointed to the FCC's rollback of broadband privacy protections earlier this year as an example.

Noting that 91 per cent of consumers feel they have lost control of how their personal information is collected and used by companies, and that 68 per cent feel existing laws are inadequate, Moy told government officials that to make things better, they have to stop making things worse. ®
