Trends in Corporate Data Ethics

Last weekend I joined the first webinar of Business Data Ethics Month, hosted by The Ohio State University. Thanks to Merve Hickock for sharing the event details here on LinkedIn.

The session was moderated by Dennis Hirsch and featured members of the Ohio State research team, who each shared their key findings.

Interestingly (aka sadly), Facebook was one of the research sponsors. Even so, I truly enjoyed the webinar, and the research findings were very informative. I look forward to attending the next one on Monday.

The report on Emerging Trends in the Governance of Advanced Analytics and AI is available HERE. The upcoming sessions are:

  • Management Approaches | 12th Oct
  • Technologies for the Ethical Deployment of Algorithms | 23rd Oct
  • Regulation of Algorithms: Policymaker & Expert Perspectives | 29th Oct

Senator Chris Coons, who seems genuinely concerned about the intersection of technology and civil rights, opened the session.

“Congress has not done enough to ensure our laws meet the demands of the 21st century. AI brings real benefits and possibilities. We know it comes with real risks. We need an approach that continues the protections that have been a core part of our civil rights, to protect people against unjustified mistreatment. I’m particularly concerned about the ways in which unknown black-box algorithms that make critical decisions may impact citizens of the world.

Algorithms and AI are increasingly being used to decide who gets access to education, housing, and credit. Studies have shown that algorithms can reinforce existing societal discrimination. They can be invasive and manipulative. That’s a lot of why I’m so excited for this series and to see the initial product around developing a framework.

Congress needs to take your input. We need to move forward with ethics and values that reflect Americans rather than those of another nation. Much of the digital economy has gone unregulated and unchecked. Our privacy laws are not meeting the reality of the 21st century. Privacy laws are built almost exclusively on the consent framework. We have all probably scrolled through terms and conditions without reading or understanding them.

Millions of Americans are routinely giving away rights without understanding them. With AI and algorithms, this does not protect us at all. Algorithms are too complicated and make these decisions on the backs of consumers. We need federal legislation to provide a more reliable framework for consumer protection. I’m working on a bill that would regulate these determinations to make sure they are fair and comply with our civil rights.

This would allow the FTC to ensure this does not injure consumers. It would provide protection for individuals so they can understand how their data is being used in critical decisions that affect them. I want us to develop a legislative framework that’s usable and flexible for the developments to come in the years and decades ahead. One challenge around legislating highly technological matters is that Congress typically doesn’t revisit things for decades.

On the intellectual property subcommittee we’re talking about an act that’s decades old now. I want to work to make sure we get this right, so I welcome the input of everyone participating today. As we move forward we need to do so in a way that protects our values. If the US is to continue to be a world leader in AI, consumers will need to trust the products being brought to market. We have peer and competitor countries implementing similar tools without regard for core values.

Their alternative and competing models are achieving rapid adoption globally. I think it is long past time for Congress to ensure we can reap the rewards of these technologies without losing our chance to lead on the global stage.

In talking to senators about this (algorithmic eligibility determinations), I find they don’t realize the ways in which complex AI makes it possible to draw connections between different things and apply them with discriminatory intent and impact, in a way that would be considered outrageous and unacceptable. This is a critical civil rights issue. It’s a little technical and hard to explain to the average person who isn’t informed about the digital economy, but it is a central civil rights issue.”

Dennis Hirsch introduced the research background:

“Amazon gets tens of thousands of employment applications a year. A few years ago they developed an AI tool to sort through resumes. They trained the tool on the resumes of a largely male workforce.

The tool penalized resumes that used the word “women’s.” Amazon spotted the gender bias problem and abandoned the project. This brief story encapsulates the motivations behind our research project. It shows the promise and the dangers of advanced analytics and AI. It shows the importance of governance to spot these problems and help people. You can regulate this.

We need a deep understanding of how to do this type of corporate governance. This is missing in the literature today, and it is what we tried to develop in this study. We had to put together an interdisciplinary team, and over the past two years this team interviewed privacy managers and consultants who are known as leaders in the area.

We explored central questions. How do leading companies see the threats of advanced analytics and AI? Why are the companies seeking to reduce these threats? What technical solutions are they using? Our team’s presentation today will follow this structure. Before I turn to the team, let me tell you what we found. The law lags advanced analytics and AI.

Compliance with the law is not sufficient to prevent the harms these technologies can create. Some companies want to do more than the law requires. This is data ethics. They are going into the realm of ethics. As scholars of regulation, we recognize this as a form of beyond-compliance business behavior. We have seen this in the environmental area, when a company reduces its carbon footprint even though the law does not require it to do so.

Unlike damage to the environment, AI risks are more subjective and depend on what people think society should be. There’s no avoiding the normative questions here. Companies have to grapple with what it means to do the right thing, and we saw them doing that. They developed standards for what it means to make responsible decisions about AI projects that might pose risks to others, even if they might get those decisions wrong. What does it mean to be a responsible actor in this area?”
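
The Amazon anecdote maps onto a concrete technical check. Below is a minimal sketch, not anything the research team presented, of a counterfactual probe that flags term-level gender bias in a resume scorer; the score_resume function, the swap list, and the tolerance are all hypothetical placeholders.

    # Hypothetical sketch: probe a resume-scoring model for term-level gender
    # bias by swapping gendered terms and measuring how much the score moves.
    # `score_resume`, the swap list, and the tolerance are illustrative only.

    GENDERED_SWAPS = [("women's", "men's"), ("sorority", "fraternity")]

    def counterfactual_gap(score_resume, resume_text):
        """Largest score change caused by swapping a single gendered term."""
        base = score_resume(resume_text)
        gaps = [
            abs(base - score_resume(resume_text.replace(term, counterpart)))
            for term, counterpart in GENDERED_SWAPS
            if term in resume_text
        ]
        return max(gaps, default=0.0)

    def audit_scorer(score_resume, resumes, tolerance=0.05):
        """Return the resumes whose scores shift by more than `tolerance`."""
        return [r for r in resumes if counterfactual_gap(score_resume, r) > tolerance]

A check like this can only surface the problem; as the Amazon story shows, deciding what to do about it (they abandoned the project) is a governance question.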

David Norris, a PhD candidate in sociology, shared what they learned about the risks:

“I think the risks are getting a lot of attention and it’s bringing the enthusiasm about the potential down to earth. When we went into this we thought about the key risks like invasions of privacy and discrimination against protected classes. Our interviewees had a larger constellation of concerns. They brought up manipulation of vulnerabilities. Did I choose to buy a product? Or was I susceptible to it? Others talked about opacity. Decisions from advanced analytics tools are difficult to explain and to contest, especially if a consumer feels they have been wronged.

This got companies thinking about errors. There’s the issue of false positives and negatives, but also the errors that permeate data; incorrect information is likely to creep in. One in four people have errors in their credit reports. The last concern came up less frequently, but companies were thinking about how the advancement of AI might be outstripping society’s ability to metabolize this growth.

As we develop tools like driverless cars, what does that mean for the people who drive cars and the economy built around that employment? How risks are conceptualized is interesting, and what came out of our survey was more interesting still. We had a menu of risks and asked which were receiving a lot of attention in respondents’ companies. We see that 80% of our respondents are really dealing with privacy, while 50% had discrimination receiving attention internally.

Displacement of labor and other issues are uneven across companies. This unevenness matters because not every risk is relevant to every company, and the risks a company feels it is engaging with shape why and how it may or may not go about managing those risks.”

Tim Bartley, a professor of sociology at Washington University in St. Louis, talked about why companies pursue data ethics when the law doesn’t require them to do so.

“The report discusses this question in a variety of ways. I’ll try to boil it down to three points and hope you read the elaborations in the report. It’s clear from our interviews and survey that companies are seeking to protect their reputations. “If we have a reputational hit . . . that’s something we want to avoid.” Our survey showed that companies that said their industry has faced pressure were 1.5-1.8x more likely to have a policy related to data ethics.

Some talked about reputation with regulators and business partners, particularly for business-to-business firms, and, importantly, reputation with potential employees. There are two markets you compete in: one to sell products, the other to attract talent. Companies are also trying to shape or prepare for regulation. Almost 70% of our sample agreed that some kind of state-level regulation is likely in the coming years; 50% agreed a federal regulation is coming.

They’re going to work to influence the development of those frameworks and develop their own standards. Outside the US, the EU’s GDPR looms large over this space. We found some interesting things about what is most influential for many companies: it is mostly the GDPR’s legitimate-interest balancing test that has helped some companies. We found some evidence that companies are thinking about data ethics initiatives and policies as ways to expand their ability to use data.

This is an interesting strategic complement.”

Piers Turner, an associate professor of philosophy at OSU, talked about what they learned about substantive benchmarks.

“First, the thing to emphasize is the appreciation, among the people we talked to, that they need to engage in this beyond-compliance thinking even when they are motivated by strategic considerations. That’s really interesting. It means they have to engage this space whatever their motivation happens to be. If they’re going to protect their reputation, they have to engage questions about advanced analytics or AI.

They have to think about what it is to be a responsible company in this beyond-compliance space. What is the standard? Our report summarizes findings in each of those areas. We have a summary of some of the leading frameworks’ principles: things like autonomy, fairness, equality, diversity, safety, and transparency. Many of these get repeated across different frameworks.

They help with issue spotting and help companies think about what they shouldn’t be doing. The third element is the responsible decision-making part and the need for judgment. There we saw people struggling with how to apply these general principles or ethical thoughts to particular cases. How do we weigh them? This became interesting because what we actually saw them doing was relying on intuitive judgments. Asking themselves: what would my mother say about this? Does it feel right?

Does it pass the ear test? Does it sound right to us? I’m sympathetic to this; there is a need to come to a judgment holistically. We have started to describe how companies are grappling with this, and we saw them having difficulty. I think a real question is: what standard are they trying to track? They were interested in doing the right thing. That was a common phrase.

When you asked them what they were trying to do, it seemed like what they were really trying to track was what an informed public would hold them accountable for doing or not doing. That’s not exactly the same as doing the right thing, but since it depends on the public’s views of what is right, the two are not completely unrelated. I would end by saying that any comprehensive data ethics view that addresses itself to these companies will have to do two things: address the specific harms and the leading principles, and also provide some guidance on how to take those considerations together and make decisions in a reliable way.

What might those processes look like that would go beyond these intuitive judgments?”

Aravind Chandraskaran, an associate dean at Fisher, talked about the management processes companies are using in their data ethics efforts.

“In our conversations and in the survey we found three important trends in the management systems. The first is organizational structure: where does this function sit? A common home for data ethics was the privacy area, but we find it moving toward strategy and technology groups, so decisions are better informed. A lot of these decisions turn on technologies and innovations happening right now.

It’s important for the management structure to resonate with that. We see companies moving toward placing this work with the technology group within their organization. This goes beyond compliance. The second trend is who is responsible for this. We see an increasing trend toward a Chief Data Ethics Officer: you have to have the role close to the governing body, and you have to elevate it. We see a consistent rise in these officers reporting directly to the CEO so they can make quick decisions.

Sometimes decision making has to happen very quickly, and if the role is buried in the organization, quick decisions are hard to make. We find these newly emerging roles get this done much quicker. The next question is how do you spot issues? Most of the organizations dealing with big data problems are big organizations. One form is the hub-and-spoke idea: information is brought in from upstream and downstream in the supply chain, because I’m not always connected to my users, and then brought to the front.

There’s also the importance of having an external advisory board. The field of data ethics and the underlying technology change quickly and constantly, so how do companies get up-to-date information to make better decisions? We see the idea of an external board helping companies spot ethical dilemmas. The next trend is an increase in frequency. I always say it’s important to spot these things at a high frequency.

A steady flow of informal information about where problems are emerging allows companies to spot these issues in a timely manner. Increasing the frequency of conversations, even informal ones, lets them surface problems quickly. The last question is how do I react? That’s about having an internal advisory committee. It’s critical to have varied representation on these committees: legal experts, technologists, and a variety of people contributing to better decisions.

This is some preliminary evidence about the management systems companies are actually using.”

Srini Parathasarathy, a professor of computer science and engineering and of biomedical informatics, and the director of Ohio State’s data mining research, talked about the technologies companies are using:

“Another point I’d like to make is that privacy and ethics are not necessarily synonymous. Many interviewees point out that data can still be used in ways that are ethically problematic. The second broad theme that emerged is algorithmic fairness technologies. Several of our interviewees brought up the importance of this area, while pointing out that much of this work is in its early stages. A couple of interviewees noted the need for inclusiveness of marginalized groups, which becomes very important in times of crisis, like the one we are experiencing now.

The third theme is the need for these increasingly complex AI technologies and algorithms to be explainable, and for companies to understand some of the risks associated with their use. Several interviewees emphasized the importance of this aspect for facilitating procedural fairness and trust. Others pointed out that companies may have to go beyond explainability to understand the risks and the failure modes.

The fourth theme we heard was the importance of the emerging area of algorithmic auditing. These tend to be complex, multi-step processes that can produce different risks at each step, so companies need to be able to audit these algorithms. Organizations among both the survey respondents and the interviewees see this kind of work as an important area. Finally, we heard from many companies about the different types of systems technologies they are using.

These serve both access control and data insulation, making sure only the right people have access to the right data.”
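
To make the auditing theme concrete, here is a minimal sketch of one statistic such an audit might compute: the ratio of selection rates across groups, with the 0.8 “four-fifths” threshold borrowed from US employment practice. The data shape, group labels, and threshold are my assumptions, not something the speakers prescribed.

    # Illustrative audit statistic: demographic-parity ratio across groups.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, approved) pairs; returns rate per group."""
        approved = defaultdict(int)
        total = defaultdict(int)
        for group, ok in decisions:
            total[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / total[g] for g in total}

    def parity_ratio(decisions):
        """Lowest group selection rate divided by the highest."""
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values())

    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    print(parity_ratio(outcomes))  # 0.5 here, below the 0.8 rule of thumb

A full audit would go well beyond one number, but even this toy ratio shows why step-by-step, auditable pipelines matter.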

Q&A

QUESTION #1

“Please don’t kill the golden goose of AI with stifling regulations before it can lay its golden eggs.” From what you saw from these companies, do you think regulation requiring companies to do more in this area would kill AI or help it?

ANSWER #1

Aravind Chandraskaran: “It’s very subjective. What I have seen is simple rules, rules that companies are proactively adopting, like having checklists. When they are working with sensitive data, are they complying with what their customers or stakeholders think is the right thing to do? Would I talk about this to my mother? Companies are taking those tests and turning them into rubrics that allow them to catch problems earlier.

We can see that even before some of these regulations existed, we had simple tests that didn’t hurt things like trade; they made things better. As companies see this and regulatory bodies pass more reforms, it will only help us, not hurt us. We’re doing the right thing by catching defects before they impact our customers or stakeholders.”

QUESTION #2

“I’m the privacy tech lead at Nexus Research. We support synthetic data efforts and I’m also the technical lead at [Inaudible.] Privatized data is a civil liberty but it’s important to understand the interaction between it and algorithmic policy making. Have you looked at privatized data?”

ANSWER #2

Srini Parathasarathy: “There are two points I want to make; companies are certainly aware of these issues. The first is that just because you ensure privacy or are in compliance with the law does not mean that the way you’re treating the data is necessarily ethical. Understanding that it goes beyond privacy is very important for the ethical governance of data; privacy is a necessary condition, not a sufficient one. The second is understanding the interplay between how data is privatized and how it is used, and by used I mean for algorithmic decision making.

Several of our interviewees also point this out. There are questions about whether a given use of data serves the societal common good. Is it appropriate to use it in this way? I think companies are wrestling with this. We do cover some of this in the report. There’s more work to be done.”
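
For readers unfamiliar with the term, “privatized data” in the question likely refers to techniques such as differential privacy, where calibrated noise is added before data is released. The sketch below is a minimal illustration assuming the Laplace mechanism and an arbitrary epsilon; the speakers did not endorse any specific method, and their caveat still applies: a privatized statistic can still feed an ethically questionable decision.

    # Minimal sketch of the Laplace mechanism from differential privacy.
    import random

    def laplace_sample(scale):
        """Laplace(0, scale) drawn as the difference of two exponential variates."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(true_count, epsilon=0.1, sensitivity=1.0):
        """Release a count with noise calibrated for epsilon-differential privacy."""
        return true_count + laplace_sample(sensitivity / epsilon)

    # The released value masks any one individual's presence in the data,
    # but says nothing about whether acting on the count is ethical.
    print(private_count(1200))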
