Volker Türk on AI: Key points
- Human rights provide a framework for the safe development, design, and use of AI
- AI’s impact is ultimately about power and how it is exercised
- Inequity, bias and discrimination are the biggest human rights risks from rapid AI expansion
- Human rights impact assessments are needed during the design, development and deployment of AI
- Tech companies must follow the UN’s business and human rights principles
- When AI developers lack a deep understanding of fundamental social and ethical principles, they risk creating “Frankenstein’s monster”
Speaking at the AI Impact Summit in New Delhi, Mr. Türk told UN News that the technology must be governed through a human rights framework that ensures transparency, accountability and inclusion. This interview has been edited for clarity and length.

Volker Türk: Artificial intelligence is a technological tool, and it needs to be developed on the basis of risk assessments. Technological tools are used to exercise power, for good but also for bad, so we need to make sure that there is a framework within which they are developed, designed and used, and that’s where human rights come in.
UN News: What are the biggest human rights risks that you see from rapid AI expansion today?
Volker Türk: There is a huge issue of inequity, and that’s why I’m so happy that this AI summit is taking place in India. It’s really important that these tools are used everywhere and that they are developed everywhere.
Then there’s the issue of bias and discrimination. If the data are only collected from one part of the world, if only men are developing AI, then unconscious bias will be built in. We believe that it’s key to be mindful of vulnerable groups and minorities because they are often excluded from AI development. It’s about meaningful participation and giving a vision of a better world. Human rights provide that vision.
UN News: Generative AI is moving faster than regulation. What guardrails must governments and companies put in place as a matter of urgency?
Volker Türk: Take the pharmaceutical industry as an example: testing can last a long time because any risks associated with a new product must be identified before it goes on sale. When it comes to AI tools, we need to demand that companies carry out a human rights impact assessment when they design, roll out and market them.
We have seen for quite some time now that some companies have bigger budgets than some smaller countries. If you are able to control technology not just in your own country but around the world, you exercise power. You can use that power for good – to do things that hopefully help in areas such as health, education and sustainable development – but you can also use it for bad things, such as automated lethal weapons and spreading disinformation, hate and violent misogyny.
Source: The UN
