-
Asked by anon-256449 on 9 Jul 2020.
-
Kim Liu answered on 9 Jul 2020:
Great question 🙂 I think we should, as with any scientific problem, rely on dedicated teams of scientists, ethicists and philosophers across universities, industry and government who research AI ethics. I’ve seen job adverts for ‘Ethics scientist/researcher’ roles at places like DeepMind (Google’s AI branch) and The Alan Turing Institute – I’m pleased that these jobs exist, and they look really interesting! A great resource for learning about solutions to ethical questions like this is 80,000 Hours (https://80000hours.org/topic/priority-paths/ai-policy/). The website considers the most meaningful work a person can do today from an effective altruism, utilitarian perspective – it’s really cool for learning about these careers ~
-
Nicole Wheeler answered on 9 Jul 2020:
Great question, Ella! This is a tricky issue at the moment and is currently being discussed by a lot of experts. Most AI developers don’t aim to make dangerous or unfair AI, but that doesn’t stop this sort of AI from being made accidentally. Part of the effort to make AI safer and more fair is building ethics components into educational programs that train the developers of AI. Another issue is diversity in AI developers – you’re much more likely to become a developer if you fit in with the predominantly white male demographic that makes up existing developers in a lot of big companies, and if you’re wealthy enough to access the education and computational resources you need to become established in this area.
Part of the work I’m doing at the moment is teaching high school students to build their own AI, introducing ways in which algorithms can be biased by decisions the developer makes, by bias in the underlying data, or by field-specific challenges the developer may not be aware of. The aim is to give the general public an intuition for how AI ethics is developing, enabling them to have an influence as voters and consumers and expanding the range of people who get a say in how AI should be regulated.
-
Alan Winfield answered on 10 Jul 2020:
Great question. Many people – including myself – worry about this a lot. My view is that AI needs to be regulated, by law. But many governments are reluctant to introduce regulation because they think that regulation hinders innovation. In fact this is not true: think about aviation, an industry that is very strongly regulated yet still produces loads of innovation.
Regulation typically relies on standards. Standards are *really* important – they are agreed-upon ways of doing things. There are thousands of standards on everything from the safety of your toothpaste to how WiFi works. Fortunately, people are already developing new ethical standards in AI. If you are interested in finding out more about these new ethical standards, check out my blog post here: https://alanwinfield.blogspot.com/2019/07/ethical-standards-in-robotics-and-ai.html
Comments
anon-256449 commented on :
Thank you for answering my question!! I was wondering what other ways there are to mitigate or remove bias in AI systems?
Alan commented on :
Removing bias in AI systems is *really* hard, but one important aspect is to make sure the ‘training data sets’ used to train the AI (using machine learning) are truly representative of the diversity of the subject of the AI. So, imagine you are building an AI to recognize a dog’s breed from an image of the dog. If the training data set contains only images of terriers, then the AI will be great at recognizing terriers but useless with any other breed. So to be unbiased, the AI’s training data set would need to contain representative images of *all* breeds of dogs.
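As a minimal sketch (using a tiny made-up label list, not any real data set), you could check how balanced a training set is simply by counting how often each breed appears:

```python
from collections import Counter

# Hypothetical breed labels for the images in a training set
training_labels = ["terrier", "terrier", "labrador", "poodle", "terrier"]

counts = Counter(training_labels)
total = sum(counts.values())

# Report each breed's share of the training set; a heavily skewed
# distribution is a warning sign that the AI will be biased.
for breed, n in counts.most_common():
    print(f"{breed}: {n} images ({100 * n / total:.1f}% of the training set)")
```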
Kim commented on :
I actually think it’s impossible, because no human can be completely unbiased 🙂 Ideally, you’d need an AI to make the AI, but even that AI would be built upon human knowledge. If we’re at the stage where AIs are making AIs making AIs, there will be other things to think about haha.
Nicole commented on :
There are a lot of approaches being developed for identifying sources of bias, which can then be addressed through data collection or tweaking how the algorithm is trained.
For example, one real-world case of a problematic bias is an AI designed to detect skin cancer from photos. These AIs have learned to look for markings drawn onto the skin by doctors, or rulers added to the photo to show how big a mole is, because those appear more often in photos of cancers than in photos of harmless skin lesions. One way to detect this bias is to make the AI highlight the parts of an image that are driving its prediction, which lets you see that the AI’s “attention” is not on the mole but on these features outside it. In this case, editing the marks out of the photos removes the bias.
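One simple way to produce that kind of highlight is occlusion sensitivity (just one of several possible methods, sketched here rather than a definitive implementation): hide one patch of the image at a time and see how much the prediction changes. This assumes a Keras-style classifier called `model` that outputs a cancer probability, and a NumPy image array – both hypothetical names, not any particular real system.

```python
import numpy as np

def occlusion_saliency(model, image, patch=16, baseline=0.0):
    """Slide a blank patch over the image and record how much the model's
    cancer probability drops when each region is hidden. Regions with the
    biggest drop are the ones driving the prediction."""
    h, w, _ = image.shape
    base_score = model.predict(image[None])[0, 0]  # probability of "cancer"
    rows = (h + patch - 1) // patch
    cols = (w + patch - 1) // patch
    saliency = np.zeros((rows, cols))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = baseline
            score = model.predict(occluded[None])[0, 0]
            saliency[i // patch, j // patch] = base_score - score
    return saliency  # big values far from the mole point to a spurious cue
```

Plotting the returned grid as a heat map over the photo makes it easy to spot when the “attention” sits on a ruler or pen mark rather than on the mole itself.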
Another example is bias against a certain group of people. There are now fairness auditing systems to identify this, where you include demographic features that may be a source of bias, like age, gender and ethnicity, and then test the proportion of mistakes the AI makes across these demographic groups. If there is a disproportionate number of errors for one group of people, you could change the way you train your algorithm, gather more representative data, or choose not to deploy that algorithm for real-world decision making. This sort of test could be made a standard check in the approval process for AI algorithms.
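A minimal sketch of this kind of audit, using a tiny made-up table of predictions rather than any real data:

```python
import pandas as pd

# Hypothetical audit table: demographic group, true outcome, model prediction
df = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "B", "A"],
    "true_label": [1, 0, 1, 0, 1, 1],
    "predicted":  [1, 0, 0, 1, 0, 1],
})

df["error"] = (df["true_label"] != df["predicted"]).astype(int)

# Error rate per demographic group; a large gap between groups is the
# kind of disproportionate error this audit is designed to surface.
print(df.groupby("group")["error"].mean())
```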
A newer approach that’s being discussed but hasn’t really been put in place is creating opportunities for “red teaming”, where you get together a group of people who know the field (both the specific problem, e.g. skin cancers, and AI) and challenge them to fool the algorithm. A similar approach is “bug bounties”, where you offer a reward to anyone who can find a weakness in an algorithm and show how much of a problem it is. Both of these need financial incentives, so they would have to be supported by government, industry or another group, but they would probably be the best solution to the problem.