Hot take: I’m not ready to hop on the AI hype train just yet.

Don’t get me wrong: I understand that the possibilities of AI are there, even if many of them haven’t quite been realized yet. AI has the potential to make our lives easier. AI will definitely increase productivity. Even if the mention of ChatGPT makes you want to roll your eyes so hard they fall out of your head, sparing you from another useless AI-generated “news article” littering page one of Google’s search results, the technology is accelerating at an exponential rate, and there’s no turning back now.

It’s just that, for me, the very real harms of AI, such as exacerbating racial discrimination in policing and surveillance or scraping struggling artists’ work without consent, to name only a few, still far outweigh its currently novel but ultimately still tepid applications.

Reflecting on Three Sessions from Three 2023 Salesforce Dreamin’ Conferences

A rising tide may lift all boats, but what if you don’t have a boat to begin with? What if you are already struggling to keep your head above water, and then the deluge comes?

And yet, after recently wrapping up a hat trick of conferences (Midwest Dreamin’, WITness Success, and Mile High Dreamin’) whose sessions and keynotes this year were inevitably dominated by some permutation of AI (and in particular, generative AI), three sessions I attended have managed to, if not entirely change, then at least somewhat soften my bearish stance on the whole thing.

At the very least, they’ve given me a lot of food for thought around AI and accessibility: who gets to have it, who doesn’t, and what that ultimately means.

“Generative AI and Ethics: Safeguarding Privacy and Nurturing Trust in the Salesforce Ecosystem”

Robert Wieland, Mile High Dreamin’ 2023

At this year’s Mile High Dreamin’, Robert Wieland, an AI Ethicist and Senior Salesforce Engineer at Verisk Analytics, led his audience through a brief but fascinating history of AI development, which started as early as 1966 with the creation of ELIZA, the world’s earliest AI chatbot. His tour through AI history’s highlights and lowlights centered less on the latest groundbreaking developments and more on the philosophies, ethical concerns, and questions that have arisen along the way, a refreshing change from the shock-and-awe sales pitches I usually hear about the wonders of AI.

AI’s ethics framework can trace its roots to the 1979 Belmont Report, which laid the ethical foundation for human subjects research in medicine and the social sciences:

Human Autonomy, or respecting people’s decisions without injecting bias or manipulation into the decision-making process

Beneficence, or how to minimize harm while maximizing human well-being and benefits

Justice, or how to ensure equitable access and the broadest, most equal distribution of benefits

“In the realm of AI, ethics involves the thoughtful consideration of the potential impacts of AI technologies on individuals, society, and the environment,” Wieland said. “It prompts us to assess how these technologies align with our shared values and to ensure their responsible development and deployment.”

When it comes to predictive AI and language models, the ethical and social risks are not insubstantial, Wieland went on to explain, and we’ve unfortunately already begun to see them play out in an alarming number of ways: producing discrimination, exclusion, and toxicity; being used for misinformation by malicious actors; causing incidental environmental harm as a byproduct of the sheer processing power these increasingly complex models demand; and causing unintentional harm when humans overly trust a language model or treat it as human-like.

While he’s not looking at the AI world through rose-colored glasses, Wieland ultimately ended his presentation on a more optimistic note. There are, after all, a number of very smart people trying to steer this ship for whom these ethical considerations are always top of mind, including Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, and Kathy Baxter, Principal Architect of Ethical AI Practice. Together, they put out five guidelines for responsible generative AI development to act as Salesforce’s North Star.

While it’s reassuring to know that Salesforce wants to responsibly balance innovation with ethics, I’m more skeptical than Wieland on this front: I can only place so much trust in a corporate or governmental entity’s ability to hold itself accountable to its own self-proclaimed principles, an area where even Salesforce is not without controversy. Wieland himself noted, “Even if you’re following the law, you can do things where people get queasy,” in reference to the 2012 controversy in which Target used customer data to predict when someone was pregnant based on their shopping behavior and market to them accordingly. And as we’ve seen many U.S. states begin to roll back LGBTQ+ protections and legal access to abortion, just because something is law does not necessarily mean it is ethical or just. How will the power of AI be wielded in those instances?

“Tech for Good: AI’s Role in Uplifting Marginalized and Underserved Communities”

Jaye Cherenfant, WITness Success 2023

Jaye Cherenfant is a Salesforce Administrator, tech enthusiast, and AI strategist who spent over a decade empowering Black students in the U.S. and South Africa before founding her own sustainable gardening business and later launching Vista Tech Solutions, LLC, a tech consulting company.

As a Black, neurodivergent woman in tech, Cherenfant understands the vital importance of leveraging technology for beneficence, especially when it comes to serving the marginalized and underserved. In her session, she was especially concerned with how AI can inherit societal biases, further discrimination, and lead to data privacy violations that disproportionately impact BIPOC communities.

One of the best ways to begin to address these concerns, Cherenfant argued, is to ensure that tech teams, especially AI teams, diversify: Black people need a seat at the table.

But that’s far easier said than done. As of 2021, Black workers made up less than 10% of the STEM workforce, and Black women represented only 2% of the tech industry. These figures are mirrored in the STEM pipeline, where Black students earn only 9% of STEM degrees at all levels. According to a report from Jobs for the Future, the primary reasons for this underrepresentation are “systemic and structural barriers that Black learners confront from an early age into adulthood.” This includes a lack of access to a quality education and resources, which, in the U.S., are allocated based on wealth.

According to 2023 Pew Research Center data, over half of Black households in the U.S. make less than $50,000 a year, with 30% making less than $25,000.

One cascading effect of these inequities, Cherenfant pointed out, is a growing Digital Divide between children from low-income households and their more affluent peers, a disparity that worsened during the COVID-19 pandemic. According to the Pew Research Center, almost 60% of lower-income families experienced at least one of the following digital access obstacles during the COVID-19 school shutdowns:

Having to use a mobile phone to complete schoolwork

Needing to use public Wi-Fi to complete schoolwork due to unreliable or no internet connectivity at home

Being unable to complete schoolwork due to not owning a computer

As the pace of technological innovation keeps accelerating, those without access to the knowledge and tools needed to shape these advances, never mind merely keep up with them, will fall further behind and eventually be shut out of these critical spheres altogether.

So where does that leave us? Where can we even begin to address these challenges? Cherenfant advocates that activism can begin locally; her own efforts range from collaborating with her children on generative AI art projects to volunteering at local groups and community-driven events that introduce the community, and especially its youth, to the world of AI and its practical applications. Giving underrepresented groups access to, knowledge of, and the skills to use AI is the first step toward giving them that much-needed seat at the table.

My feelings on this are, as ever, somewhat mixed. On one hand, giving marginalized people equitable access to privileged white spaces to empower themselves and others is crucially important to AI’s future and to mitigating the growing harm perpetuated by systemic biases like flawed racial-profiling software and “predictive policing” algorithms.

On the other hand, as Audre Lorde said, “The master’s tools will never dismantle the master’s house.” Can we ignore how, despite assurances, companies are actively replacing or attempting to replace human creative labor, including already underrepresented Black creative labor, with generative AI to the point where even Hollywood has sat up and taken notice? Or how these technologies still betray their systemic bias even when they are being used by Black creators because of the inherently biased data sets they’re trained on?

I don’t know what the right answers are, or if there even are any to be had right now. If the goal laid out in our AI Ethics framework is to make sure AI is doing the greatest amount of good with the least amount of harm, what is an acceptable level of harm and who gets to decide what that is?

“How to Create Accessible Digital Marketing Assets”

Cara Weese, WITness Success 2023

While Cara Weese, CRM & Marketing Automation Strategist at Sercante (and, in full disclosure, one of my favorite coworkers ever), did not directly address AI during her presentation, her topic nevertheless runs in the same circles as AI discussions: accessibility, in this case specifically for people with disabilities.

Weese set the stage for her presentation by sharing her own powerful story as a person with a disability, driving home the point that people with disabilities aren’t an imaginary segment of the population to be treated as an afterthought or, worse still, as acceptable collateral damage when ADA compliance proves too costly or bothersome. In fact, according to the World Health Organization, 1.3 billion people, or 16% of the world’s population, have a significant disability.

If we marketers don’t center accessibility-first strategies in our work, Weese said, we not only exclude a not-insignificant portion of the population, we also risk a number of repercussions: missed opportunities to expand our customer base, create positive associations with our brand, encourage inclusivity in others, and improve our quality ranking score and SEO.

And if that wasn’t convincing enough, businesses that fail to comply with ADA regulations are liable for some hefty penalties should their non-compliant practices be reported.

As Donald A. Norman, author of the influential book The Design of Everyday Things, points out, “Designing for people with disabilities almost always leads to products that work better for everyone.” Using large, legible fonts and high contrast in our emails not only helps those with visual disabilities; consider how much the elderly with failing eyesight would also appreciate these design choices, or how many of us now turn on the subtitles to watch TV shows and films.
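To make “high contrast” concrete: WCAG 2.1 defines a measurable contrast ratio between text and background colors, with 4.5:1 as the minimum for normal-size text at the AA level. Here’s a minimal Python sketch of that check; the formula comes from the WCAG spec, while the example colors are purely illustrative, not values from Weese’s session:

```python
# Minimal WCAG 2.1 contrast-ratio check. The formulas follow the WCAG spec;
# the example colors below are illustrative assumptions only.

def channel_luminance(c8: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG 2.1 definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a color given as a hex string like '#555555'."""
    h = hex_color.lstrip("#")
    r, g, b = (channel_luminance(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), where L1 is the lighter color."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

if __name__ == "__main__":
    ratio = contrast_ratio("#555555", "#FFFFFF")  # mid-gray text on white
    print(f"Contrast ratio: {ratio:.2f}:1")
    print(f"Passes WCAG AA for normal text (>= 4.5:1): {ratio >= 4.5}")
```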

Even in this, class is inextricably entwined with accessibility, furthering the Digital Divide. Assistive technologies such as screen readers help the visually impaired navigate the digital world, but their high price points can pose a significant barrier for lower-income households. And even if someone manages to secure a lower-cost device, as Weese explained, newer, more expensive screen readers are often better at parsing web pages and emails than cheaper ones, even when the content doesn’t entirely meet web accessibility requirements.

Conclusion

I’m not anti-AI.

I’m eager to see how AI will get better at making inaccessible digital content accessible. I’ve already played around with gen AI for coding and for generating seed data for Salesforce imports. And I’m looking forward to trying Jaye Cherenfant’s method of using gen AI to study for Salesforce certification exams.
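For the curious, that seed data looks roughly like this. A minimal Python sketch (my own illustration, not output from any particular gen AI tool) that writes fake Contact records to a CSV a Salesforce import tool can ingest; it assumes the third-party Faker library, and the field names and record count are arbitrary choices for the example:

```python
# Minimal sketch: generate fake Contact seed data for a Salesforce CSV import.
# Assumes the third-party Faker library (pip install faker). The columns follow
# Salesforce's standard Contact fields, but adjust them to fit your org.
import csv

from faker import Faker

fake = Faker()

FIELDS = ["FirstName", "LastName", "Email", "Phone", "Title"]

def make_contact() -> dict:
    """Build one fake Contact row keyed by Salesforce field names."""
    return {
        "FirstName": fake.first_name(),
        "LastName": fake.last_name(),
        "Email": fake.unique.email(),  # unique proxy avoids duplicate emails
        "Phone": fake.phone_number(),
        "Title": fake.job(),
    }

if __name__ == "__main__":
    with open("contact_seed_data.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for _ in range(50):  # 50 sample contacts, an arbitrary count
            writer.writerow(make_contact())
    print("Wrote contact_seed_data.csv")
```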

But more important to me than what I want AI to do is how I want it to be used, and how I don’t want it to be used. I’d love to see the rich experience of the marginalized welcomed into and included in AI’s development, not only to empower those communities but to improve the accuracy and power of AI itself. I want to see AI close the gap between the privileged and the underserved.

I don’t want to see the worst consequences of AI fall upon the most vulnerable among us: those with lower incomes, those who have been excluded from consideration, those who will suffer the most from climate change, and those who are primed to be heavily exploited by richer and vastly more powerful entities.

The tide is rising, and the sea is rough. If we can’t stem the tide or even slow it down, then at the very least, I hope we have the courage and strength to pull others out of the water and into the boat with us on our way up.
