Regulatory authorities in Indonesia and Malaysia have officially blocked Grok, the generative AI chatbot developed by Elon Musk's xAI, following reports that the tool was used to generate sexually explicit, non-consensual deepfake images of citizens, particularly women and children.
Indonesia's Minister of Communication and Digital Affairs, Meutya Hafid, said the ban was necessary for public safety, noting that the platform had failed to prevent the creation of harmful content that violates national laws on dignity and human rights.
The Malaysian Communications and Multimedia Commission raised the same concerns, stating that the chatbot's safeguards were insufficient to stop users from generating manipulated, sexually explicit images of real women and children.
The controversy erupted after users of X, formerly Twitter, used the image generation tool built into the platform's Grok AI chatbot to digitally manipulate photos of real people, including removing clothing from uploaded images.
Both Indonesia and Malaysia are Muslim-majority countries with strict laws on pornography and obscenity. This cultural and legal framework made swift regulatory action more likely than in Western nations, where free-speech considerations complicate the debate.
Following the global backlash, xAI restricted image generation to paying subscribers, arguing that requiring payment information would deter bad actors. Regulators counter that this measure is too weak to stop determined users.
Other countries have also taken action. In the United Kingdom, the Office of Communications (Ofcom), the regulatory and competition authority for communications, launched a formal investigation into whether Grok's outputs comply with the stringent requirements of the new Online Safety Act.
India has also issued formal notices to X Corp, demanding the immediate removal of all explicit AI-generated content. European Union officials are scrutinizing the platform's adherence to digital safety standards, raising the possibility of a wider regional ban.