McKenzie Jackson | California Black Media
Sofia Mbega’s first exposure to technology, more specifically Artificial Intelligence (A.I.), happened years before she moved from East Africa to the Golden State.
Mbega was a student at the University of Dodoma in Tanzania when her mother, Gloria Mawaliza, suggested she take a technology course after learning about computer science from co-workers at the international children’s nonprofit World Vision. Mbega, now a Stockton resident, said studying software engineering, a field in which she earned a degree in 2015, was previously unheard of in Tanzania.
“We were the first batch of students,” Mbega said of herself and her classmates. “It was a new profession for my country.”

When she learned about A.I. systems, a topic that continues to grab headlines across the U.S. as experts and pundits wrestle with its merits and dangers, Mbega was intrigued. “I was so excited,” she recalled. “But I did not picture things would be like this. I thought A.I. would only be something to help software engineers.” The technology has moved well beyond that purpose.
The A.I. floodgates opened into the mainstream late last year with the public release of the generative A.I. chatbot ChatGPT, which uses natural language processing to create humanlike conversational dialogue. A.I.’s popularity has sparked discussions on how chatbots and other A.I. applications, such as facial recognition and voice generators, will impact the workforce, educational systems, entertainment, and individuals’ daily lives.
Although they account for only a small percentage of the technology sector workforce, Black women like Mbega, a 31-year-old independent data analysis contractor, are constantly assessing the positives and negatives of A.I. and what it is like to work in the industry.
Mbega, a California resident since 2018, is a member of Black Women in A.I., a three-year-old organization that aims to educate and empower Black women. Although she is still excited about A.I., she says alarm bells are ringing.
Ask a large language model-based chatbot like ChatGPT a question, and it will answer. People have used A.I. to draft emails, compose music, write computer code, and create videos and images.
Mbega worries that bad actors could use A.I. for nefarious reasons. “Someone can make a video of someone saying a crazy or bad thing and people will believe it,” she said.
Oakland resident Joy Dixon, a software engineering manager at Hazel Health and the founder of Mosaic Presence Inc., is concerned about students becoming too dependent on A.I. to do educational tasks such as write papers and solve problems.
“How much is it really advancing them?” Dixon asked. “Is it doing us a disservice that we won’t see now, but maybe in five to 10 years?” Her main concern with A.I. though is prejudices present in the technology. “A.I. is built on models of people, and people have their own biases and challenges,” Dixon said. “Computers aren’t neutral.”
There are documented instances of A.I. image generators producing distorted or stereotypical images of Black people when directed to create an image of a “Black” or “African American” person. The technology has created images depicting Black people with lighter skin tones or non-Black hair.
In July, Bloomberg analyzed more than 5,000 images generated by Stability AI’s Stable Diffusion and revealed that the text-to-image model amplified stereotypes about race and gender. It portrayed individuals with lighter skin tones as having high-paying jobs and people with darker skin tones having occupations such as dishwashers, janitors, and housekeepers.
Google disabled its A.I. program’s ability to let people search for monkeys and gorillas through its Photos app eight years ago because the algorithm was incorrectly putting Black people in those categories. A.I. developers have said they are addressing the issue of biases, but Dixon, 53, who has worked in tech since 1997, believes the problem will persist unless more people of color participate in constructing the systems A.I. technology is built upon.
“When car airbags were first released, they killed more women than saved women because nobody tested them on crash dummies that were the size of women,” she said. “There is similar concern about A.I. If you are only building models with a certain subset of the demographic, then you are leaving whole groups out.”
Gov. Gavin Newsom signed an executive order on Sept. 6 to examine the use, development, and risks of A.I. in the state and to shape a process for deployment and evaluation of the technology.

Newsom called A.I. “transformative technology” and noted that the government sees the good and bad of A.I. “We’re taking a clear-eyed, humble approach to this world-changing technology,” he said.
Dr. Brandeis Marshall, a data scientist and professor at Atlanta’s Spelman College, said Black women in technology have skills equal to or better than their counterparts, so more should be involved in the construction of A.I. systems. However, they do not get the same opportunities. “I meet plenty of Black women who have all the chops, but they haven’t been promoted,” she said. “You tend to be the only one in the room.”
Black Women in A.I. founder Angle Bush of Houston said Black women can contribute much to A.I. “We have had to be innovative,” she said. “If we don’t have something, we figure out a way to create it. There are a lot of ideas that haven’t come to fruition because of lack of access and opportunity. It has nothing to do with our aptitude.”
Mbega believes the technology can be groundbreaking in health care and help identify ailments such as brain cancer.
Marshall said any discussions of A.I. systems taking over the world like in a Hollywood blockbuster are overblown. “Right now, we get inundated with all the cool things,” she said. “Then, we seem surprised that there are harmful things. Let’s get a 360 view before we put all of our chips in one basket.”