After ChatGPT, US Congress Prohibits Staff Members from Using Microsoft’s AI Tool Copilot

After OpenAI’s ChatGPT, the US Congress has barred its employees from using Microsoft’s Copilot AI. According to a report by Axios, the decision stems from significant security concerns. Staff members will no longer be able to use Copilot on their government-issued devices, as stated in a memo from House Chief Administrative Officer Catherine Szpindor.

The memo cited the risk of data leaks to unauthorized cloud services, a concern raised by the Office of Cybersecurity. Employees can, however, still use Copilot on their personal devices.

A Microsoft spokesperson informed Reuters, “We recognize that government users have higher security requirements for data. That’s why we announced a roadmap of Microsoft AI tools, like Copilot, that meet federal government security and compliance requirements that we intend to deliver later this year.”

House Chief Administrative Officer Catherine L. Szpindor stated in the memo that lawmakers and staff are now restricted to using ChatGPT Plus, the paid version of OpenAI’s AI chatbot, due to its improved privacy features. Offices can use the product only for “research and evaluation” with privacy settings enabled. Staff members are prohibited from pasting “any blocks of text that have not already been made public” into the service.

The memo also stated, “No other versions of ChatGPT or other large language model AI software are authorized for use in the House currently.”

Last year, two Democratic and two Republican US senators introduced legislation to ban the use of artificial intelligence to create content that falsely depicts candidates in political advertisements intended to influence federal elections. According to the report, Szpindor’s office will evaluate the government version of Copilot once it is released to determine its suitability for use on House devices.

Microsoft had previously announced plans to introduce several tools and services for the government, including Azure OpenAI services for classified workloads and an improved version of Microsoft 365’s Copilot assistant, to better protect the government’s sensitive data.

Beyond the government, several tech companies, including Samsung and Apple, have also restricted their employees from using generative AI tools like ChatGPT, citing concerns about the security of sensitive data. These moves follow earlier OpenAI privacy incidents. For example, a ChatGPT bug leaked users’ chat histories on the platform; OpenAI later attributed the issue to “a bug in an open source library”.
