The report, which covers due diligence performed in 2020 and 2021, includes a summary of a controversial human rights impact assessment of India that Meta commissioned law firm Foley Hoag to conduct.
Human rights groups including Amnesty International and Human Rights Watch have demanded the release of the India assessment in full, accusing Meta of stalling in a joint letter sent in January.
In its summary, Meta said the law firm had noted the potential for "salient human rights risks" involving Meta's platforms, including "advocacy of hatred that incites hostility, discrimination, or violence."
The assessment, it added, did not probe "accusations of bias in content moderation."
Ratik Asokan, a representative from India Civil Watch International who participated in the assessment and later organized the joint letter, told Reuters the summary struck him as an attempt by Meta to "whitewash" the firm's findings.
"It's as clear evidence as you can get that they're very uncomfortable with the information that's in that report," he said. "At least show the courage to release the executive summary so we can see what the independent law firm has said."
Human Rights Watch researcher Deborah Brown likewise called the summary "selective" and said it "brings us no closer" to understanding the company's role in the spread of hate speech in India or commitments it will make to address the issue.
Rights groups for years have raised alarms about anti-Muslim hate speech stoking tensions in India, Meta's largest market globally by number of users.
Meta's top public policy executive in India stepped down in 2020 following a Wall Street Journal report that she opposed applying the company's rules to Hindu nationalist figures flagged internally for promoting violence.
In its report, Meta said it was studying the India recommendations, but did not commit to implementing them as it did with other rights assessments.
Asked about the difference, Meta Human Rights Director Miranda Sissons pointed to United Nations guidelines cautioning against risks to "affected stakeholders, personnel or to legitimate requirements of commercial confidentiality."
"The format of the reporting can be influenced by a variety of factors, including security reasons," Sissons told Reuters.
Sissons, who joined Meta in 2019, said her team now comprises eight people, while about 100 others work on human rights within related teams.
In addition to country-level assessments, the report outlined her team's work on Meta's COVID-19 response and Ray-Ban Stories smart glasses, which involved flagging possible privacy risks and effects on vulnerable groups.
Sissons said analysis of augmented and virtual reality technologies, which Meta has prioritized with its bet on the "metaverse," is largely taking place this year and would be discussed in subsequent reports.