Meta has dismantled its team devoted to developing ethical artificial intelligence, according to an internal memo leaked recently. The Responsible AI group, formed in 2019 to ensure fair and bias-free model training, has been redirected to focus on expanding the company's generative AI capabilities instead.
Most of the specialized unit will now work on building new generative AI models. The move raises red flags around Meta's professed commitment to AI accountability. Despite public pledges to advance AI for social good, past reporting found the Responsible AI group functionally toothless inside the organization: its oversight powers proved limited, and its work was hobbled by excessive bureaucracy.
With regulators still formulating comprehensive policies to govern AI development, Meta's deprioritization of its safeguards team deals a blow to the ethical application of artificial intelligence in the tech sector. With few meaningful guardrails in place, redirecting staff toward the unchecked growth of generative models risks exacerbating the spread of synthetic media and automated disinformation campaigns. Nonetheless, a Meta spokesperson told Business Insider the move was intended to "bring the staff closer to the development of core products and technologies."
Industry leaders have warned that the expanded creation of deepfake imagery poses immediate threats to truth and privacy absent countermeasures. Unfortunately, Meta's latest reorganization suggests that ambitions around innovation and profits may supersede caution inside the AI pioneer for the foreseeable future.