A new series of courses at Athabasca University aims to deliver the skills needed to address the ethical development of artificial intelligence (AI) systems.
The AI ethics program will be fully available online next year. It will consist of four courses covering data, machine learning and robotics.
The series is delivered by PowerED, an entrepreneurial unit within Athabasca University. The program is the first micro-credential of its kind in Canada, according to the Alberta university.
The term AI refers to the capacity of computers to simulate or surpass intelligent human behaviour. But it’s come to be used for a wide range of smart technologies.
“AI ethics is an umbrella term that’s come to encompass all the different ways in which AI can create harm,” said Katrina Ingram, the CEO of Ethically Aligned AI, who helped develop the course.
“It’s essentially about smart autonomous machines that are able to achieve goals,” Ingram said.
Privacy is a key issue with these technologies — for example, how a photo posted on Facebook can find its way into another company’s hands.
“When it comes to how these AI systems are creating a culture of surveillance, that’s really concerning,” Ingram said.
Beyond data collection, there are also issues around how algorithms are operating and even reinforcing power imbalances or inequities.
Facial recognition technology — often used in law enforcement — is a particularly infamous example. Researchers have shown the technology misidentifies people of colour at significantly higher rates.
Nidhi Hegde, an associate professor in computing science at the University of Alberta and fellow with the Alberta Machine Intelligence Institute, said machine learning and algorithms can be used for a wide array of applications — from determining movie recommendations to sorting through job applications.
In assembling various pieces of data from a person’s background, however, these technologies may be picking up on correlations and reinforcing historical inequalities, she said.
They may make decisions “based on features that don’t really reflect whether this person is suitable for the job or not — like race, or age or gender,” she said.
Hegde said it is not as simple as removing those items from the data set.
“You could still have unfairness issues because there might be historical reasons for which certain races, genders or ages are lumped into a certain category.”
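The proxy effect Hegde describes can be illustrated with a toy simulation. The sketch below uses entirely hypothetical data: a hidden group label is never shown to the "model," but a postal-code feature correlates with it, so a screening rule learned from historically biased hiring outcomes still disadvantages one group. All names, zones, and rates here are invented for illustration.

```python
import random

random.seed(0)

# Toy data: each applicant has a postal zone and a hidden group label.
# The group label is NOT a model feature, but postal zone correlates
# with it -- a "proxy" feature. All values here are hypothetical.
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Group A applicants mostly live in zone "Z1", group B in "Z2".
    if group == "A":
        postal = "Z1" if random.random() < 0.9 else "Z2"
    else:
        postal = "Z2" if random.random() < 0.9 else "Z1"
    # Historical hiring favoured group B, independent of qualification.
    hired = random.random() < (0.2 if group == "A" else 0.6)
    applicants.append({"group": group, "postal": postal, "hired": hired})

# "Model": recommend applicants from zones whose historical hire rate
# exceeds 40%. Note the group label is never used as an input.
def zone_rate(zone):
    zone_apps = [a for a in applicants if a["postal"] == zone]
    return sum(a["hired"] for a in zone_apps) / len(zone_apps)

predict = {z: zone_rate(z) > 0.4 for z in ("Z1", "Z2")}

# Outcome by (hidden) group: group A is still screened out far more
# often, because postal zone encodes group membership.
for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(predict[a["postal"]] for a in members) / len(members)
    print(f"group {g}: recommended at rate {rate:.2f}")
```

Dropping the postal-code column would not fully fix this either; with enough correlated features, group membership can usually be reconstructed from the rest of the data — which is Hegde's point.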
As AI has been applied more and more to daily life, Hegde said there has been growing concern about how it is affecting society.
“Definitely social media is talking about this a lot more, people are more vocal about it. People working in this domain are more vocal about it,” she said.
“So there is more and more awareness.”
Ingram said the Athabasca program can appeal to a general audience but is also valuable for developers designing these systems.
She said the issues at the heart of AI ethics are expected to become even more pertinent as these technologies proliferate.
“Every company is a technology company to some degree,” Ingram said.
“And so I think every organization to some degree is going to encounter these AI technologies and these AI ethics issues.”