My one and only Google Badge. March 2011 — March 2022.

Leaving Google at the Convergence

Reflecting on 11 years and the gifts of Responsible AI

Tracy Pizzo Frey
12 min read · Mar 29, 2022


I start every job I’ve ever had with the same two goals: work myself out of a job, and leave before I am done. Arriving at the combo of “I have” and “it’s time” means, unsurprisingly, that a chapter is closing. On March 2nd, 11 years and 2 weeks after receiving my “Welcome to Google!” letter, I turned in my badge and said goodbye to an organization I never thought I would know so well and for so long.

While I learned and grew tremendously from all of my Google experiences, the last four and a half years with Cloud AI & Industry Solutions put decades of product and go-to-market strategy skill building to work. While this alone would have been a fascinating ride, it would not have been a life-changing one.

But it was.

In this role, all of my experiences as a woman in mostly male environments and my many, ah, circuitous career paths — the ones that have provided endless fodder for the “share a fun fact about yourself” moments — converged. This confluence surfaced my life’s mission: Responsible AI.

In this role I found myself dusting off my degrees in Gender Studies & Sexuality and Educational Psychology, looking deeply, with new inquiry, at the incredible lessons from my time with the unparalleled Carol Gilligan. I dug up my old lesson plans for teaching literature to middle and high-schoolers as we sought to unpack meaning, understanding, and belonging from what words and stories allowed us to see. I revisited what I learned about empathy creation from immersing children in nature, traditional wilderness skills, and community building. I retraced my baby steps in anthropological ethnographic research. I called on human-centered design lessons from Stanford’s d.School. Everything suddenly had meaning and relevance.

Prioritizing the full range of the human experience in building advanced technology is deeply inspiring, highly complex, immensely challenging and unbelievably worth every second. Every time we bravely embrace the complexity and create the space to honestly confront and engage with ethical considerations and the responsible use of technology, the benefits are exponential. When we do this right, our frames of reference open and we understand that where we may have formerly drawn the line around our scope of responsibility no longer makes sense. We realize that in order to build the most advanced, innovative, safe, robust and successful technology, we need to deeply explore the sociotechnical landscape. We need to honestly confront the blind spots we may not even know we have because of what is considered “normal” and what is “other” in our world. Even if we really don’t like what we see. When we do this, change can, and will, happen.

When I joined Cloud AI in September of 2017, my first self-appointed project was to find a meaningful way to assess long-term impact of AI as an integral part of product development. I knew this needed to be grounded in core values, and uncovering what mattered to our organization — Courage, Integrity, Compassion, and Impact — created the foundation for an evaluative process I started testing out early in 2018. This grew into Cloud’s Governance processes for operationalizing Google’s AI Principles.

These processes hold within them the work that has made me the most proud, the most curious, the most compassionate and the most courageous. And these last four and a half years have confirmed for me time and time again what began as a hypothesis for Cloud in 2017: that Responsible AI is synonymous with and inseparable from Successful AI.

I can’t even believe my luck that I got to spend our earliest months with the extraordinary Shannon Vallor, who helped us to refine our process into a systematic, repeatable, and robust framework. Shannon taught me how to engage in courageous conversation about ethics and responsibility in technology in ways that simultaneously met people where they were while not diminishing what I needed to communicate.

I am so grateful to many, many remarkable colleagues: the teams in Responsible Innovation and Machine Learning Fairness and Interpretability; Google’s heroic Human and Civil Rights team, who (among other things) helped us formalize the critical work of external Human Rights Impact Assessments; the tireless public policy and global standards teams helping to bring Responsible AI to the fore of what I believe will be some of the most critical policies of our time; and the mission-critical social scientists, tech ethicists, and human rights researchers whose knowledge and voices sit at the intersection of what AI needs to embrace and the path to success.

I can’t even start on my own team, folks. Yeah, we had the longest team name ever (Outbound Product Management, Engagements & Responsible AI — aka OPERA. Obviously.) but if I could entrust the future of our world to anyone, it would be this crew and OK FINE I’m a bit biased here, but also it is true. Sorry, I don’t make the rules.

I am also forever grateful to two women who have been some of my biggest teachers: Meg Mitchell and Timnit Gebru. I remain deeply upset by how they left Google, and I missed them tremendously over the last year. I know my silence hurt them, as did some of the ways I most tried to support them both. I also know I made the choices I did consciously and with reasons I stand behind. As hard as it was, and remains today, I do not regret those very personal choices. I am also sorry. I recognize that the ability to choose is a privilege, and while I know I can hold the tension of confidence in my choices, the regret I have about them, and even the pride I have in what I know I did behind the scenes, I also know there are more lessons for me to learn. In the end, the experiences these women had brought me face to face with whether it was time for me to leave Google, and no matter where any of our futures take us, I treasure the lessons they gave me, directly, indirectly, in partnership, and in adversity. I will continue to learn from both of them throughout my career.

Rajen Sheth. You are unparalleled. Forever.

Rajen showing Cloud AI how Karaoke is done (yes, I got his permission 😇 ). London, 2018

As I embark on my next chapter as a Founding Partner at Uncommon Impact Ventures, where among other things I will be building a Responsible Technology practice, I want to share some lessons and truths that represent my hopes for any organization — including Google — in engaging in this work.

Five Lessons.

  1. Technical excellence is not enough. AI is sociotechnical — the technology cannot be separated from the humans and human-created workflows that inform, shape and develop its use. Truth be told, most AI systems are built on many individual pieces of humanity (aka data). This data is built on persistent historical information (reaching all the way up to the present moment), and encodes the societal systems, power dynamics, and human choices in those histories. Humans tell AI what it is seeing, and AI then tries to match this against present, real-world information. When there is a match, success! When there isn’t, the system fails. When the mismatch represents whole groups of humanity whose data the system is missing or cannot see, the system fails consistently and disproportionately. This creates harm.
  2. Fairness is contextual. Fairness needs to be defined for every applied use of AI. When you do this, you become aware of the harms you could create, and only then are you able to solve for them. Then you can build the best solution, creating the most value, benefit and (BONUS!) measurable impact. By the way, this is where tooling comes in — it isn’t a solution on its own (see the sketch after this list for what a basic check can look like).
  3. Without active harm correction, progress is incremental. Even if you do a robust assessment, if you don’t work to solve for what is invisible and missing in your data, any mitigations for your historical data alone will at best be incremental. Incremental improvement in what you can see, combined with unmitigated harms in what you can’t, will be far more harmful and costly than the upfront work of assessing and correcting.
  4. Change is inevitable. The rate of innovation in AI continues to outpace Moore’s Law. Even if the rate of technological change slows down, societal change does not. A choice that felt acceptable years ago does not remain acceptable in perpetuity. We need to approach the use of advanced technology with the humility and curiosity of lifelong learners. This can be deeply anxiety-provoking in the world of enterprise technology; however, I’ve found that engaging in this complexity with customers grows, rather than erodes, trust.
  5. AI is fallible. AI isn’t magic; it is math. Because the math encodes the sociotechnical nature of AI, it isn’t always right. Not by a long shot. This is why AI should not be the sole or primary basis of decision making, especially when the outcomes can significantly impact humans, society or the environment. I’ll go a step farther and say that having a “human in the loop” is not enough in these situations. Meaningful human control and multiple sources of information are critical in guiding decisions.
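
To make the first two lessons a bit more concrete, here is a minimal sketch in Python of what a basic disaggregated check can look like. This is not any Google tooling, and the data, groups, and model outputs below are entirely hypothetical; the point is only that a single aggregate metric can hide a group the system consistently fails, and that choosing which metric to compare is itself a contextual, per-use decision.

```python
# A minimal, hypothetical sketch of a disaggregated evaluation: instead of one
# aggregate accuracy number, break errors out by group so a disproportionate
# failure mode becomes visible. The records below are invented for illustration.

from collections import defaultdict

def error_rate_by_group(records):
    """records: dicts with 'group', 'label', and 'prediction' keys."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results for a binary classifier.
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 0},  # the group that is sparse or
    {"group": "B", "label": 0, "prediction": 1},  # missing in the training data
]

print(error_rate_by_group(records))
# {'A': 0.0, 'B': 1.0}: aggregate accuracy (~67%) hides this gap entirely.
# Which metric matters (error rate, false negative rate, selection rate) depends
# on the applied use and the harms it could create; the tooling surfaces the
# numbers, but it cannot decide what "fair" means for your context.
```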

Three Truths.

1. Process matters

Writing down your values for use of advanced technologies is incredibly important, but never sufficient. You need to operationalize those values through clear, repeatable, and dedicated governance processes. Commitment to process also solves two critical challenges.

First, an established process removes personal beliefs from the driver’s seat. In our reviews, hearing and sharing personal lived experiences helped to situate the committee in particular harms or opportunities, and these informed our assessment against each AI Principle. The process then applied guardrails to ensure that input was contextualized through our analyses.

Second, without a repeatable process that is followed at all times by those involved (which does not mean it is static — it should adjust to new information and realities; it just needs to be re-ratified when it does), it becomes far too easy to put your credibility at risk should something come into question.

2. Transparency is a balm

This one can be fraught, but I really don’t think it needs to be. You will never find universal agreement with evaluation outcomes (and I would argue this should be a non-goal). The noble goal, in my opinion, is to provide clarity about the process and your commitment to it. This creates a shared understanding of how evaluations happen, what was considered, and actions you took to align to your charter, mitigate harms and realize opportunities. This allows for disagreements about outcomes to be matched with trust that the process is always thorough, thoughtful, and action-oriented. To be clear, transparency does not mean sharing everything all the time. Not all information is shareable, nor is all information relevant.

Here’s the thing. When meaningful process exists, but transparency is not proactive, distrust has its strongest breeding ground.

In the absence of meaningful transparency, distrust drives people to seek out any information they can find and draw their own conclusions about what is happening. This might be inaccurate or incomplete, but how is one to know if there is no transparency? If concerns are raised and the lack of transparency persists, everything gets worse. This is dangerous, and it is toxic. Building proactive and dependable transparency into your governance will propel trust in your work.

There is a worse situation, which is when information is complete but the recommendations were not acted on, either in part or entirely. Perhaps there are totally valid reasons for this; if so, your process should capture those. In situations where there are no meaningful reasons for the lack of action, this can be a signal that information that should be known is being hidden. When inaction is surfaced despite you, not by you, it is not transparency; it is lack of trust made manifest. This creates reactivity, which by extension fuels defensiveness, leading to further distrust. Now you are in a vicious cycle that is difficult to break. This is made even worse if the lack of action is not directly addressed in a meaningful and honest way.

When transparency is structured around the process, honors the reality that universal agreement is not the goal, and provides dependable, relevant and shareable information (bonus points for being clear when there is confidential information you will not share, and any valid reasons for inaction), confidence grows and you can do your best work.

3. Everything depends on psychological safety

Those working in Responsible AI need to engage with some of the most complex, sensitive, and often deeply personal realities of systemic racism, misogyny, hate and injustice in all forms, unfair bias, human rights violations, geopolitical realities, cultural norms & practices, disproportionate outcomes across impacted users, and ALL THE THINGS ALL THE TIME. When you focus on consistently assessing, prioritizing and growing psychological safety, extraordinary things can and will happen.

Trust. You understand trust in a work environment in an entirely new way. This also enables you to see far more quickly when it is bruised, battered or broken, and not just within this work but everywhere.

Listening. You learn to listen. I mean really listen. Sometimes, this means listening to what is unsaid or feels too difficult. This in turn allows you to create the space and time for those things to surface, and when they do I promise they are critical to success.

Courage. You normalize difficult conversations. Everywhere. I have found that I am able to have what would otherwise have been extremely challenging conversations in all parts of my work, and also at home with my family, friends and loved ones.

Awareness. Once you see harms, you never unsee them again. This creates a paradigm shift that becomes a ripple effect. You start to make different decisions. You educate others about why. You actively look for other places where the same harms might show up and you see them there too. And then you see how pervasive they are, and how harmful they can be, and you can never go back to making decisions without that full context because you know that they won’t be the best decisions you can make.

Inclusion. Most importantly in my opinion, psychological safety creates the conditions under which those of us who come from positions of privilege, power or authority see how we benefit from being the “norm” in societal and sociotechnical systems. We might not participate consciously, but the very fact that we get the option of unconscious participation in systems that benefit us while they disproportionately harm others demands an honest reckoning with what we are continuing to enable every day. Addressing this takes organizational commitment, and it requires individual work. I’m speaking to my fellow white folks here: the work needs to be done by us, and to create the change that fuels success, it needs to be about us as individuals too. This opens the door for inclusion. When inclusion is combined with psychological safety, well then you get belonging.

For me personally, this has meant I can no longer go through life not questioning the systems I interact with, or reacting poorly if I’m given the gift of being questioned about them, just because they work well for me. I can’t assume this means they work as well, or at all, for everyone. Understanding this doesn’t mean I now need to try to solve every systemic or societal challenge through a narrow thing I’m trying to create. That is not the goal. The goal, however, is that when I have all of the information I can possibly gather to the best of my abilities, I can make smarter choices about what I am able to approach with what I am doing, and I can better understand how to maximize the opportunities before me and mitigate the harms I might otherwise create or exacerbate. It means that I start to see, more and more, the intersections and interconnections — the societal map — and how all the myriad pathways running every which way can, at times unknowingly, build on each other, and how this can create exponential, intersectional, and disproportionate harms within marginalized communities and groups.

Advanced technologies really can be used to create extreme harm. They also can be used to correct, heal and restore. You can have the latter without the former, but you have to work for it. I choose the latter.

I’m going to save my excitement about sharing what the Uncommon Impact crew is up to (I mean, this post does NOT need to be any longer…), so for today I’ll leave you with this.

Excellence is doing a common thing in an uncommon way.

— Booker T. Washington

Uncommon Partner Love


Written by Tracy Pizzo Frey

Founding Partner, Uncommon Impact Ventures. Founder, Restorative AI. One time dancer, teacher, forest explorer, Googler. Forever mom of 2 not-so-small Freys.
