
1.WHAT IS “RESPONSIBLE AI”?
A “responsible” AI could help prevent children from viewing both illegal and legal-but-harmful content online. This new research presents a framework for using so-called “responsible” AI to assist in content moderation.
2.WHY IS RESPONSIBLE AI IMPORTANT?
The proposed system sifts through vast amounts of language and images to identify the most harmful domains that threaten children and young people, including hate speech, cyberbullying, suicide, anorexia, child violence, and child sexual abuse. From this analysis, a “dictionary” of insights about the risks children face online can be built.
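As a minimal illustration only, and not the authors’ implementation, the sketch below shows how such a “dictionary” might be used to flag text against the harm domains listed above. The names HARM_LEXICON, Flag, and tag_text, as well as every term in the lexicon, are illustrative assumptions; a real system would rely on far richer lexicons combined with trained models rather than keyword matching.

```python
# Hypothetical sketch of a "dictionary"-based harm-domain tagger.
from dataclasses import dataclass

# Assumed mapping of harm domains to a few indicative terms (illustrative only).
HARM_LEXICON = {
    "hate_speech": {"vermin", "subhuman"},
    "cyberbullying": {"nobody likes you", "loser"},
    "self_harm": {"end it all", "kill myself"},
    "eating_disorder": {"thinspo", "skip meals"},
}

@dataclass
class Flag:
    domain: str
    matched_term: str

def tag_text(text: str) -> list[Flag]:
    """Return every harm domain whose indicative terms appear in the text."""
    lowered = text.lower()
    flags = []
    for domain, terms in HARM_LEXICON.items():
        for term in terms:
            if term in lowered:
                flags.append(Flag(domain=domain, matched_term=term))
    return flags

if __name__ == "__main__":
    for flag in tag_text("nobody likes you, just end it all"):
        print(f"{flag.domain}: matched '{flag.matched_term}'")
```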
3.HOW DOES IT WORK?
The system uses natural language processing algorithms with a knowledge layer that enables the technology to understand language the way humans do. This means technology that can grasp the context of comments, the nuances of language, and the social ties of age and relationships.
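To make the “knowledge layer” idea concrete, here is a hedged sketch of how a raw language-model score might be combined with contextual facts such as ages and relationship before a final decision. Everything here, including MessageContext, base_model_score, contextual_risk, and the thresholds, is an assumed illustration rather than the system described in the research.

```python
# Hypothetical sketch: a base NLP score adjusted by contextual "knowledge".
from dataclasses import dataclass

@dataclass
class MessageContext:
    sender_age: int
    recipient_age: int
    relationship: str  # e.g. "stranger", "classmate", "family"

def base_model_score(text: str) -> float:
    """Stand-in for a trained classifier's harm probability (0.0 to 1.0)."""
    return 0.4 if "meet up alone" in text.lower() else 0.1

def contextual_risk(score: float, ctx: MessageContext) -> float:
    """Adjust the raw score using contextual rules from the knowledge layer."""
    adjusted = score
    # A large age gap between an adult sender and a child raises the risk.
    if ctx.sender_age >= 18 and ctx.recipient_age < 16:
        adjusted += 0.3
    # Messages from strangers carry more risk than those from known contacts.
    if ctx.relationship == "stranger":
        adjusted += 0.2
    return min(adjusted, 1.0)

if __name__ == "__main__":
    ctx = MessageContext(sender_age=35, recipient_age=13, relationship="stranger")
    text = "Let's meet up alone, don't tell anyone."
    print(f"risk = {contextual_risk(base_model_score(text), ctx):.2f}")
```

The point of the design is that the same sentence can be benign between classmates and alarming from an adult stranger; the contextual adjustment captures the “social ties of age and relationships” mentioned above.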
If regulators understood what is possible, namely “smart” technology that reads between the lines and distinguishes benign from malicious communications, they could require its presence in the legislation they want to pass.