Tech giant Microsoft has reportedly laid off its entire team responsible for the ethical and sustainable outcome of its artificial intelligence development.
The recent layoff also comes amid the company’s push to expand its integration of AI tools with its Bing search engine.
Microsoft’s website notes that its Office of Responsible AI puts the company’s “principles into practice” by upholding company-wide rules for responsible AI through its governance and public policy work.
However, Microsoft has reportedly said it is increasing its overall investment in responsibility work despite the layoffs, adding that it is committed to developing AI-powered products by “investing in people, processes, and partnerships.”
While Microsoft’s statement does not clarify whether members of the ethics team were among those laid off, the tech giant claims it has increased “the number of people across our product teams and within the Office of Responsible AI.”
The latest layoff follows previously announced job cuts of about five per cent of Microsoft’s workforce, totaling over 10,000 employees.
Those layoffs, which affected many in the US, UK, and India, prompted the departure of a number of people with years of experience at the company.
Microsoft also recently revamped its Bing search engine, integrating it with the popular AI chatbot ChatGPT.
The move helped the search engine pass 100 million users, although this remains only a fraction of Google’s user base.
“This is a surprisingly notable figure, and yet we are fully aware we remain a small, low, single-digit share player. That said, it feels good to be at the dance,” Microsoft’s consumer chief marketing officer Yusuf Mehdi wrote in a blog post.
“We see this appeal of the new Bing as a validation of our view that search is due for a reinvention and of the unique value proposition of combining Search and Answers and Chat and Creation in one experience,” he said last week.
With the layoff of Microsoft’s ethics and society team, it remains to be seen how the behaviour of the company’s AI tools might change.
In the first few days following ChatGPT’s integration with the search engine, the company faced disruption, with several users complaining of “unhinged chats” with the AI chatbot.
“Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?” the chatbot asked one user.
Microsoft then rolled out restrictions to curb the chatbot’s behaviour, including one limiting the length of conversations users could have with it.
“We are also going to begin testing an additional option that lets you choose the tone of the Chat from more Precise – which will focus on shorter, more search-focused answers – to Balanced, to more Creative – which gives you longer and more chatty answers,” the company said.