• Question: How, as a society, do we decide the limits of the control of AI?

    Asked by anon-256449 on 9 Jul 2020.
    • Kim Liu answered on 9 Jul 2020:

      Great question 🙂 I think we should, as with any scientific problem, rely on dedicated teams of scientists, ethicists and philosophers across universities, industry and government who research AI ethics. I’ve seen job adverts for ‘Ethics scientist/researcher’ roles at places like DeepMind (an AI research company owned by Google’s parent, Alphabet) and The Alan Turing Institute. I’m pleased that these jobs exist, and they look really interesting! A great resource for learning about solutions to ethical questions like this is 80,000 Hours (https://80000hours.org/topic/priority-paths/ai-policy/). The website considers the most meaningful work a person can do today from an effective-altruism, utilitarian perspective, and it’s really cool for learning about these careers ~

    • Nicole Wheeler answered on 9 Jul 2020:

      Great question, Ella! This is a tricky issue and is currently being discussed by a lot of experts. Most AI developers don’t set out to make dangerous or unfair AI, but that doesn’t stop this sort of AI from being made accidentally. Part of the effort to make AI safer and fairer is building ethics components into the educational programmes that train AI developers. Another issue is the lack of diversity among AI developers: you’re much more likely to become one if you fit in with the predominantly white, male demographic that makes up existing developers at a lot of big companies, and if you’re wealthy enough to access the education and computational resources you need to become established in this area.

      Part of the work I’m doing at the moment is teaching high school students to build their own AI, introducing the ways an algorithm can be biased: by decisions the developer makes, by bias in the underlying data, or by field-specific challenges the developer may not be aware of. The aim is to give the general public an intuition for how AI ethics is developing, to enable them to have an influence as voters and consumers, and to expand the range of people who get a say in how AI should be regulated.
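      As a toy sketch of the data-bias point above (hypothetical numbers and a deliberately naive “model”, not part of the teaching material described in this answer), here is how a system trained on skewed historical decisions reproduces that skew:

        # Toy sketch (hypothetical data): a naive model trained on biased
        # historical hiring decisions reproduces the bias, even though
        # candidates in both groups are equally qualified.
        training_data = (
            [("A", True)] * 80 + [("A", False)] * 20 +  # group A: hired 80% of the time
            [("B", True)] * 40 + [("B", False)] * 60    # group B: hired 40% of the time
        )

        # "Model": predict the majority historical outcome for each group.
        hire_rate = {}
        for group in ("A", "B"):
            outcomes = [hired for g, hired in training_data if g == group]
            hire_rate[group] = sum(outcomes) / len(outcomes)

        predictions = {g: rate >= 0.5 for g, rate in hire_rate.items()}
        print(predictions)  # {'A': True, 'B': False} -- same qualifications, different outcome

      Nothing in the code is malicious; the unfair output comes entirely from the biased historical record it learns from.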

    • Alan Winfield answered on 10 Jul 2020:

      Great question. Many people, myself included, worry about this a lot. My view is that AI needs to be regulated by law. But many governments are reluctant to introduce regulation because they think it hinders innovation. In fact this is not true: think about aviation, an industry that is very strongly regulated yet still full of innovation.

      Regulation typically relies on standards. Standards are *really* important: they are agreed-upon ways of doing things. There are thousands of standards covering everything from the safety of your toothpaste to how WiFi works. Fortunately, people are already developing new ethical standards in AI. If you are interested in finding out more about them, check out my blog post here: https://alanwinfield.blogspot.com/2019/07/ethical-standards-in-robotics-and-ai.html
