Does Big Tech Have The Right Talent To Win Our Confidence With Its AI Creation?
Authored by Shannon Edwards via The Epoch Times,
The biggest generative AI gaffes of late, including Google Gemini’s text-to-image portrayals of “a Pope” as a woman and of the Founding Fathers as Asian and black, and its text responses suggesting false equivalencies between high-profile individuals such as Elon Musk and the Nazis, make for click-worthy headlines decrying Big Tech’s “clear bias.” But they don’t even begin to address the larger, more nuanced issue at hand.
The reason is that these hiccups resulting from an “over-correction” of Gemini’s output by its maker could be considered, in Silicon Valley parlance, a “feature” and not a “bug” of generative AI. And unless Big Tech rethinks its talent base and what constitutes “fair” and “equitable” in designing the rules around AI, we can only expect the problem to persist, and to be far harder to identify and root out in the future.
It’s important to acknowledge, though, that the development of “rules” in the creation of generative AI is not at its core scandalous, nor a secret. The entire industry, from nascent AI startups to behemoths such as Google, has been open about the more philosophical and nuanced work required to create AI innovation. Often referred to more specifically and functionally as “Responsible AI,” this work determines the “problems” that need to be addressed before the work of machine learning even begins. It’s a process and a competency that is arguably new to all tech companies playing in this space.
For Google, the work to create responsible AI “principles” began years ago, and the heads of this area share the details of their work freely. What we should take note of is that Google had solidified its “AI principles” and begun training employees in the concepts as early as 2019. You can even find details about approaches and activities, such as “Moral Imagination workshops,” that show the depth of its commitment to this work. The relevance here, of course, is that Google is not the United Nations, nor even a good representative of the United States. Of its nearly 190,000 employees worldwide, an estimated 75 percent are younger than 30, and the workforce self-reports as roughly half white and about 33 percent female.
The fact that just 7 percent of Google’s employees are older than 40 seems an exceptional disconnect when you consider that a large percentage of that demographic also identifies as Democrat, a party steadfast in its defense of President Joe Biden’s age and mental acuity at 81.
You’ll also see, stated on page 11 of Google’s recent diversity report, that 7 percent of employees self-identify as “LGBQ+ and/or Trans+,” but nowhere in the 115-page document will you find mention of age, or of any diversity beyond the standard few categories we’ve come to accept: race, gender, LGBTQ+, and sometimes disability or veteran status. It does raise the question: Whose eyes are we seeing this new world of AI innovation through? And what are the bigger implications for what is fair and “true”?
An even stickier question is whether these companies have the capacity or interest to change. We have all heartily bought into the mythology of the hoodie-wearing tech “bro” made famous by Mark Zuckerberg—and forever embedded in our cultural anthology via the film “The Social Network”—and the rigid framework for hiring that has been a point of pride for Silicon Valley companies for decades now.
Many of us have worked within a rigid hiring framework that screens for “approved” schools or requires candidates to partake in mental gymnastics, no matter the job for which they are being hired. Recently, former Google employee and Silicon Valley marketing veteran Luanne Calvert shared in her TEDx Berlin talk an anecdote about barely “getting through” the hiring process herself. A graduate of a less prestigious college, she was allowed through only because her unique skills and deep marketing experience let her be categorized as an “exception”—or, as she describes it, “an experiment.”
And although the hiring practices have changed (a bit), as Ms. Calvert notes in her talk and as I’ve seen through my former colleagues in Silicon Valley, you won’t find a recruitment push anytime soon for, say, an over-60 conservative. But I hope what we will find is that, without the broadest range of thinking represented, AI won’t meet its potential: the unpredictability of these tools will continue to foster discussion, and the inability to fully commercialize products that appeal only to a narrow audience will force a reckoning.
Perhaps, in the end, it will be capitalism that saves Google from itself.
* * *
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.