
Robotics Expert Ayanna Howard Explains How to Tackle Bias in AI

Artificial Intelligence (AI) is everywhere. Algorithms govern the advertising we see, the TV we watch, the people we listen to online. They determine whether patients qualify for life-saving drugs, and how self-driving cars should react in moments of danger.

This ubiquity leads to problems. One AI was found to discriminate against women and people with non-European sounding names. Another, working in criminal justice, mislabeled Black defendants as “high risk” at twice the rate it did so for white defendants.

All of which raises the question: is a healthy relationship with AI possible?

Robotics expert Ayanna Howard, currently Chair of the School of Interactive Computing at Georgia Tech, and formerly Senior Robotics Researcher at NASA, believes so — provided we make oversight more systematic.

 

In a recent podcast with machine learning researcher Lex Fridman, she noted that companies like Facebook and Twitter often offer coders financial rewards for finding security bugs that could compromise the system. So why not do the same for ethics?

“Find an unfairness hole, and we will pay you X for each one you find,” she said in the podcast. “Why can’t they do that? It’s a win-win: [corporations] show that they care about [bias], and that it’s important, and they don’t have to dedicate their own internal resources.

“And it also means that everyone who has their own bias lens — like, I’m interested in age, and so I’ll find ones based on age, and others on gender — means you get all of these different perspectives.”

At the same time, she called for a more open culture among the larger corporations. “Be good citizens, say we have a problem, and we are willing to open ourselves up for others to come in and look at it, and not try to fix it in house, because there’s conflict of interest,” she said.

“Just say: Community, we have this issue. Help us fix it. And we will give you the bug finder fee.”

 

Coders should think like doctors, says Ayanna Howard

Howard was quick to remind developers that their algorithms could be used by autonomous vehicles that cause an accident, or by hospitals that deny someone life-saving medication.

She therefore called for a medical-grade commitment to ethics as people programme robots and computing devices.

“Medical doctors have been in situations where their patient didn’t survive. Did they give up and go away? No. Every time they come in, they know that there might be a possibility that this patient might not survive,” she said. “So when they approach every decision, that’s in the back of their head. Those are tools – they’re given some of the tools to address that, so they don’t go crazy. But we don’t give those tools to developers.

“We should think: I have a great gift, and I can do good, but with it comes great responsibility. That’s what we teach in medical schools.”

 

Ayanna Howard with robot

Photo credit: Georgia Tech

 

Education and AI

In 2013, Howard set up Zyrobotics, which provides “diverse learners” aged three to six with early STEM education technologies to make learning to code fun. These include the AI-powered Zumo Turtle, Tommy the Robot, and the Fire Fighter.

The company was the result of funding from the National Science Foundation, which gave her the space to study how robots could be used for therapeutic rehabilitation.

It was also powered by a lifelong belief that when kids are young, and fearless, they should be exposed to challenging ideas and problem-solving opportunities that build "lifelong confidence for tackling challenges of any kind."

Howard is keen to keep AI integrated in education. On the podcast, she notes that many school districts don't have enough teachers, leaving student-to-teacher ratios too high for effective learning. She says that robotics could be brought into classrooms and afterschool activities to "offset the lack of resource in certain communities," by helping teach certain activities.

 

Politics and AI

Howard also said that AI systems could be used for workforce retraining, particularly in response to incoming job loss.

This is something key in current discourse. US Presidential candidate Andrew Yang earned international acclaim for warning that AI — and particularly automation — would completely rewrite the modern economy. Yang has just set up a nonprofit, Humanity Forward, which will test the concept of universal basic income on a city-wide level — his antidote to job losses through automation.

Howard said that while she was sure new jobs would emerge in an automated world, she worried about how these changes would displace workers — particularly those with lower levels of education who haven't necessarily specialised in AI.


“Will they have the ability, the background, to adapt to those new jobs? That I don’t know,” she said. “That could make even more polarization in our society, internationally, everywhere.

“I also worry about not having equal access to AI and all the wonderful things it can do.”

And on whether we should have an AI for President of the United States? An unequivocal no, she said: it is humans who are on the receiving end of decisions made by world leaders, and citizens need to see their leaders as human.

“But I do believe a President should use an AI as an advisor,” she added. “If you think about it, every President has a cabinet with different individuals that have different expertise that they should listen to. And you put smart people with smart expertise around certain issues, and you listen.

“I don’t see why AI can’t function as one of those smart individuals giving input.”
