AI Weekly: Algorithms, accountability, and regulating Big Tech

This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey returned to Congress for the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation.

The end of liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children were discussed, but artificial intelligence took center stage. The word “algorithm” alone was used more than 50 times.

Whereas previous hearings involved more exploratory questions and at times took on the feel of a Geek Squad tech repair session, lawmakers at this hearing asked questions grounded in evidence and seemed to treat the tech CEOs like hostile witnesses.

Representatives repeatedly cited a May 2020 Wall Street Journal article about an internal Facebook study that found the majority of people who join extremist groups do so because Facebook’s recommendation algorithm suggested it. A recent MIT Tech Review article about Facebook focusing its bias detection efforts on appeasing conservative lawmakers rather than reducing disinformation also came up, as lawmakers repeatedly asserted that self-regulation is no longer an option. Virtually throughout the more than five-hour hearing, there was a tone of unvarnished disgust and contempt for exploitative business models and the willingness to sell addictive algorithms to children.

“Big Tech is essentially giving our kids a lighted cigarette in the hope that they will remain addicted for life,” said Rep. Bill Johnson (R-OH).

In comparing Big Tech with Big Tobacco – a parallel drawn at Facebook and in a recent AI research paper – Johnson quoted then-Rep. Henry Waxman (D-CA), who in 1994 stated that Big Tobacco was “exempt from the standards of responsibility and accountability that apply to all other American companies.”

Some members of Congress proposed laws requiring tech companies to publicly report diversity data at all levels of a company and to prevent targeted ads that deliver misinformation to marginalized communities, including veterans.

Rep. Debbie Dingell (D-MI) proposed a law that would create an independent organization of researchers and computer scientists to identify misinformation before it goes viral.

Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) pointed to YouTube’s recommendation algorithm and its well-documented tendency to radicalize people; in October they introduced the Protecting Americans from Dangerous Algorithms Act to amend Section 230 and allow courts to examine the role of algorithmic amplification that leads to violence.

In addition to reforming Section 230, one of the most popular solutions lawmakers proposed was a law requiring tech companies to carry out civil rights audits or audits of algorithm performance.

It may be cathartic to watch tech CEOs whose attitudes lawmakers described as complacent and arrogant get accused of inaction on systemic issues that threaten people’s lives and democracy because they would rather make more money. But after Thursday’s bombast and bipartisan recognition of how AI can harm people, the onus is now on Washington, not Silicon Valley.

Of course, Zuckerberg or Pichai will still have to answer if the next act of white supremacist terrorism takes place and is traced directly back to a Facebook group or YouTube indoctrination, but to date lawmakers have no record of passing comprehensive legislation regulating the use of algorithms.

Bipartisan agreement on regulating facial recognition and data privacy has likewise yet to yield comprehensive legislation.

Mentions of artificial intelligence and machine learning in Congress are at an all-time high. In recent weeks, a national panel of industry experts has called for AI policy action to protect the national security interests of the United States, and Google employees have urged Congress to pass stronger laws to protect people who come forward to reveal ways AI is being used to harm others.

The details of any legislative proposal will reveal how serious lawmakers are about holding the people who make the algorithms accountable. For example, diversity reporting requirements should include breakdowns of the specific teams working with AI at Big Tech companies. Facebook and Google release diversity reports today, but those reports do not break out the diversity of AI teams.

Audits and agreed-upon standards are table stakes in industries where products and services can harm people. You can’t break ground on a construction project without an environmental impact report, and you can’t sell pharmaceuticals without going through the Food and Drug Administration. So you probably shouldn’t be able to freely deploy AI that reaches billions of people when it is discriminatory or turns extremism into profit.

Of course, accountability mechanisms designed to increase public confidence can fail. Remember Bell, the California city that was regularly audited but nonetheless turned out to be corrupt? And algorithm audits don’t always assess performance. Even when researchers document a propensity for harm, as analyses of Amazon’s Rekognition and YouTube radicalization showed in 2019, that doesn’t mean the AI isn’t still used in production today.

Some kind of regulation is coming, but the unanswered question is whether that legislation will go beyond the solutions the tech CEOs themselves endorse. Zuckerberg spoke in favor of federal privacy legislation, just as Microsoft has done in battles with state lawmakers attempting to pass data privacy laws. Zuckerberg also expressed some support for algorithm auditing as an “important area of study”; however, Facebook does not conduct systematic audits of its algorithms today, even though that was recommended by a civil rights audit of Facebook completed last summer.

Last week, the Carr Center at Harvard University published an analysis of the human rights impact assessments (HRIAs) Facebook commissioned regarding its product and presence in Myanmar following a genocide in that country. That analysis found that the third-party HRIA largely omits mention of the Rohingya and fails to assess whether algorithms played a role.

“What is the connection between the algorithm and genocide? That’s the whole point. The UN report claims there is a relationship,” co-author Mark Latonero told VentureBeat. “They said Facebook was a major contributor to the environment in which hateful language was normalized and reinforced in society.”

The Carr report says that any policy requiring human rights impact assessments should scrutinize such corporate reports carefully, as they can amount to ethics washing and “hide behind a veneer of human rights due diligence and accountability.”

To prevent this, the researchers propose conducting analyses throughout the lifecycle of AI products and services, and they assert that to center the impact of AI, algorithms must be treated as sociotechnical systems deserving of evaluation by social scientists and computer scientists alike. This is consistent with previous research insisting that AI be viewed like a bureaucracy, as well as with the perspective of AI researchers working with critical race theory.

“Determining whether or not an AI system has contributed to human rights harm is not obvious to those without the appropriate expertise and methodology,” the Carr report said. “Without additional technical expertise, those who conduct HRIAs could not themselves recommend potential changes to AI products and algorithmic processes to mitigate existing and future damage.”

The fact that several members of Congress spoke this week about the enduring harms of Big Tech seems to signal an awareness among policymakers that AI can harm people, from spreading disinformation and hate for profit to endangering children, democracy, and economic competition. If Democrats and Republicans can all agree that Big Tech is indeed a threat to children, competitive business practices, and democracy, yet still fail to take sufficient action, in time it may be lawmakers who are deemed untrustworthy.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers – and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.

Thank you for reading,

Khari Johnson

Senior AI Staff Writer

VentureBeat
