The documents about the internal functioning of Facebook, now Meta, from the cache accessed by whistleblower Frances Haugen, submitted to the US Congress and examined by The Indian Express, outline two sets of voices within the company. On one side are staff memos and internal reports flagging issues such as misinformation, particularly about minorities, spread by politicians during the 2019 Lok Sabha election campaign, hate speech, and posts that could be seen as incitements to violence. On the other side is Meta’s leadership, seemingly either brushing these concerns aside or insisting that it has done enough to deal with them. Given the clear social and political harm caused through and, perhaps, by Meta and its products, it is unfortunate that the leadership seems to win the day more often than not.

Between 2018 and 2020, staff memos highlighted a “constant barrage of nationalistic content”, “misinformation” and content denigrating minorities in India. In West Bengal, as many as 40 per cent of posts actually viewed by users were found to be “inauthentic”. After the Pulwama terror attack in February 2019, a test account (which followed no pages and had no “friends”) was inundated with content around nationalism and the military. Yet, a review meeting with Chris Cox, then Facebook’s vice president, concluded in essence that there was no reason to worry. As recently as the 2021 Assam assembly election campaign, Himanta Biswa Sarma — who subsequently became chief minister — was flagged for being involved in spreading rumours about “Muslims pursuing biological attacks against Assamese people by using chemical fertilisers to produce liver, kidney and heart disease in Assamese”. He is hardly the only politician or political outfit to be marked for such behaviour: Accounts linked to the BJP, RSS and Trinamool Congress were similarly flagged. Yet, even as hate content spiked in India, Meta shrank the budget for its review team.

A host of technical, managerial and political reasons can account for the egregious inaction in Facebook’s biggest market: The company either lacks the capacity or has not invested in the AI and manpower needed to tackle misinformation in “vernacular” languages; Meta continues to take its ethical responsibility in Western markets more seriously; India lacks both the regulation and the political will to clamp down on polarising and false political speech. However, the fundamental problem highlighted by Haugen’s revelations is that Meta continues to see its prime mandate as maximising views and screen time. For it, concern for the social and political consequences of this drive seems secondary. Meta’s AI needs to factor ethics and social impact into its operations. That said, a mature constitutional democracy cannot merely blame the erosion in the standards of its political and public conversation on the algorithms of a multinational company. Political parties, so-called cultural organisations, and those who seek and hold constitutional office must not use social media platforms for political gain through polarisation, because the toll it takes on society outlasts any electoral advantage. Ahead of a crucial round of assembly elections, this is an important note of caution.