Dr. Alex Hanna is a sociologist and research scientist working on machine learning fairness and ethical AI at Google.
Before that, she was an Assistant Professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto. She received her PhD in Sociology from the University of Wisconsin-Madison.
To learn more about Dr. Alex Hanna’s background and work, you can check out her personal website and follow her on Twitter.
1. What’s your role within the company?
I’m a research scientist within the Ethical AI team at Google. The Ethical AI team focuses on ensuring that AI is deployed in ethical and fair ways. People on our team have been focused on a few different domains: fairness in algorithmic systems, understanding transparency in models and data, and trying to understand various ways of reporting at all those levels.
I’m the team’s first sociologist, and a lot of what I do is focusing on understanding the assumptions of data that are used in machine learning systems, where the data comes from, and the sorts of considerations that are given when it comes to the data that machine learning models are trained on.
2. What’s your background? How did you get involved with this work and end up where you are today?
My PhD is in Sociology, and I started getting involved with AI ethics around 2017, when I attended a multidisciplinary workshop in the Netherlands with a few people who are in the space. I started wanting to get involved a bit more, and so I started to read the literature on it.
Before I was at Google, I was a professor at the University of Toronto’s Institute of Communication, Culture, Information and Technology. Then I came to Google, first in a different position; but since I was already doing this kind of work, I moved into the research direction, which is how I found my way into the role.
Google is very collaborative. I really appreciated that I’ve had the opportunity to work with a lot of different people, and to do work that I found quite important, and to publish it in venues which I thought were important. When I was at Toronto, I was not really having the sort of conversations that I was wanting to have, so that move has been excellent.
The team is really great — it’s got people from a wide variety of backgrounds, and is racially and gender diverse, probably more than any other team I’ve ever worked on, inside academia or outside.
3. How do you operate within the company and what is your day-to-day work like?
Our team is a research team that sits within Google Brain. Google Brain is oriented such that you can create things or do research that doesn’t necessarily have to be connected to product; but because of who we are and what we work on, we’re often in many conversations about products and policy. So we do original research that may not have a direct bearing on products, but also work that has very serious policy and product implications.
There’s also a lot of different stuff that we do both internally and externally related to policies around fairness and data ethics that people have come to our team to ask about. One thing I’m working on right now is thinking about how we annotate for gender in machine learning systems.
Right now, machine learning systems such as most facial recognition or facial analysis systems look at a face and make some sort of judgment about gender, which is nonsensical because you can’t judge gender from someone’s face; gender identity is an internal state, and what you’re getting at is simply something more like gender expression. Google has already stepped away from building a public API gender classifier, and has removed gender terms from its vision APIs.
This and similar things are in no small part due to our team. So we’re continuing that work, and we’re trying to come up with internal and external guidelines; because you shouldn’t annotate gender in order to build a classifier, and you need to take into consideration what the purpose of the system would be, and how it can potentially have detrimental downstream effects. So that’s one project I’m involved with.
We focus on original research, but with real product and policy impact.
4. What’s a concrete example of a positive change you or your team influenced?
Model Cards is a framework that we published two years ago at ACM FAT* (now ACM FAccT). It’s a way of reporting on models: their performance across different demographic groups, as well as particular ethical considerations for models. That work was published in an academic venue, but then within the last year it was adopted in many different places. For instance, Google Cloud has two public APIs that now have public model cards, so anybody can go in and look at how they do across different population subgroups, and report out what they’re doing and how well they do.
The model card work has also led to new technical infrastructure, namely Fairness Indicators, which allows statistics that are part of the framework to be computed more automatically. And the framework outlines the steps that are necessary if you’re going to do this work, and what you need to consider: it’s not just pushing a button, seeing how your thing does, and then walking away from it. You have to think deeply about the model, and how it’s being used in practice. So that itself is something that’s very appealing to particular teams.
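To make that reporting concrete, below is a minimal sketch of disaggregated evaluation in the spirit of Model Cards and Fairness Indicators: compute a classifier’s accuracy per demographic subgroup alongside the overall number, and gather the results into a small model-card-style report. This is an illustration only, not the actual Model Cards schema or the Fairness Indicators API; the toy data, subgroup labels, and field names are all invented.

```python
# Minimal, hypothetical sketch of disaggregated (per-subgroup) evaluation.
# Toy data and field names are invented for illustration; this is not the
# real Model Cards schema or the Fairness Indicators API.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in sorted(totals)}

# Toy binary-classifier outputs, each tagged with a hypothetical subgroup label.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "B", "B", "A", "B", "B", "A"]

model_card = {
    "model_name": "toy-classifier-v1",  # hypothetical identifier
    "overall_accuracy": sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true),
    "subgroup_accuracy": subgroup_accuracy(y_true, y_pred, groups),
    "ethical_considerations": "Intended use, known failure modes, and caveats go here.",
}
print(model_card)
# {'model_name': 'toy-classifier-v1', 'overall_accuracy': 0.75,
#  'subgroup_accuracy': {'A': 1.0, 'B': 0.5}, ...}
```

The gap between the toy subgroups (1.0 versus 0.5 accuracy) is exactly the kind of disparity this style of reporting is meant to surface, and to force a conversation about, before a model ships.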
5. What’s surprised you the most about doing this work or being in this role?
I’ve been happily surprised to find a team that is very interdisciplinary, and where I’m still very welcome and happy. And this is my first job outside of academia, so I guess I was surprised that there’s a lot of interest in this kind of work.
I guess it’s not surprising that some of the more logical conclusions of this work clash with the imperatives of working in a corporation in a capitalist economy, but I also think that part of this work is about trying to get people to the point of realizing that.
For instance, Manny Moss, Jake Metcalf, and danah boyd wrote this article that talks about what they call ethics owners in tech corporations, that is, people in my type of roles. They’ve tried to go through what this looks like across different companies by performing qualitative interviews.
And it confirms my intuitions when they say that there are limitations — what they’re effectively saying is that ethics owners are trying to do X, and guide product and policy development in particular ways, but market fundamentalism and tech solutionism resist those pushes.
6. What have been your biggest challenges and takeaways? Do you think it’s possible for tech companies to care about ethics and take it seriously? Will we ever get to a point where ethicists are embedded across teams?
The challenge of building ethical tech is that we are embedded in a system of racial surveillance capitalism. No one necessarily goes out and intentionally tries to make an AI that works worse for people with darker skin, or to return search results that show sexualized images when you search for Black or Asian girls. As Ruha Benjamin and Safiya Noble, among other critical tech and race scholars, have shown, these machines are racist in their effects but typically not in their intent. And that manifests in myriad ways because Google Search and AI are sociotechnical systems.
It’s not just about the training data that these systems are based upon; it’s that these systems are developed within this social and economic system. However, the incentives to act ethically can’t be left only to negative sanctions like brand risk. That’s why there needs to be regulation for data protection and AI, because it presents a way to actually force Silicon Valley’s hand.
7. How do you deal with situations where your voice or work isn’t taken seriously? Do you have advice for how to deal with that?
I suppose I just get louder. But also finding people who will be advocates or champions. I think that’s a good strategy [though] building those networks takes time.
8. What’s an area in this space that you wish you could make more of an impact in or want to see improve?
I think a lot of the perception of this work is still that people consider it to be highly technical, when it shouldn’t be considered technical. There’s plenty of work that’s been done in this space that really comes from different fields. Researchers need to be humble, especially technical researchers.
And of course I’m going to say this, because much of the research I do is non-technical, but there’s a plethora of work that’s been done in science and technology studies and sociology and whatnot that has a lot of bearing on this, and I feel like those ideas need to be taken seriously.
If you just think it’s a technical problem, you’re missing the forest for the trees.
9. Who do you look up to? Who inspires you?
A lot of the folks driving the AI ethics conversations are women, especially Black women, women of color, and queer folks. A lot of the folks who are doing the heavy lifting, or who have the most nuanced and creative views, are from those groups.
This is a funny thing to say, but I really respect my manager, Timnit Gebru (Twitter). In 2016 she went to NeurIPS and it was all these white guys, and afterwards she started Black in AI as a group to advocate for more Black AI researchers, and it’s awesome. It took an immense amount of work. And instead of just being like, I don’t want to work in AI, she was like, let’s just change this entire field. So I really respect her and her advocacy. She’s been an immense advocate for me in social science research, and is just a great advocate in general.
I have immense amounts of respect for the organizing work that Tawana Petty has done in Detroit, both with the Detroit Community Technology Project and the Our Data Bodies project, along with the rest of her collaborators in ODB: Tamika Lewis, Mariella Saba, Seeta Peña Gangadharan, and Kim Reynolds. Mariella is also involved with the Stop LAPD Spying Coalition, who have been out in front fighting policing and surveillance technologies.
I also really respect danah boyd, who I know has been doing this work for years, and Joan Donovan (Twitter), who started the Critical Internet Studies Slack workspace, is someone who’s also been at this for quite some time. And my friends who have known me for like a decade, like Anna Lauren Hoffmann (Twitter), who helped bring me into this work. She’s a professor at the University of Washington.
10. What advice do you have for students, new grads or tech workers who want to get involved but don’t know how to start?
I think what they should do is work on assessing the field. Some part of that is trying to understand what kind of subsection of the work really interests them: it could be something technical, legal scholarship, social theory, political philosophy, critical race theory, or sociology of gender.
There are lots of different angles to come at this from, and there’s just so much work that still needs to be done, especially in translating some of the concepts from one field to another.
Folks need to explore the different angles and find what they’re interested in or passionate about, and then from there, read what they find, follow citation trails, and find the scholars, organizers, and activists who are doing work in this space.
Thanks for checking out the interview and supporting the series!
This project was started by Tiffany Jiang and Shelly Bensal out of curiosity. Even before we graduated and started working in tech, we asked ourselves: Who are the ethicists working within tech companies today? Which companies offer such roles or teams? How much of the work is self-initiated? Lastly, what does “responsible innovation” or “ethics” work entail exactly?
We hope this series can serve as a helpful resource to students, new grads, or anybody who wishes to do work in this space but doesn’t know how to get involved. If you have any thoughts or comments you want to share with us, we’d love to hear them. Let us know if there’s someone you’d like us to interview next!
Twitter: (@EthicsModels) | Email: ethicsmodels.project@gmail.com.
Originally published at: https://medium.com/ethics-models/interview-with-dr-alex-hanna-researcher-on-googles-ethical-ai-team-28f61d8b3a33