White House questions impact of AI surveillance on workers

Officials in Washington, D.C. said they will hold a listening session to understand workers' experiences with AI surveillance in the workplace.

Officials in the United States are making efforts to keep tabs on the development of artificial intelligence (AI), as new plans surface to examine workers’ experience with AI surveillance. 

According to a Reuters report, officials at the White House said on May 23 that they would be asking workers how their employers use AI for monitoring purposes. This comes as federal investments are being allocated toward the development of the technology.

Regulators in the U.S. plan to hold a listening session to hear workers' experiences with AI-driven workplace surveillance, monitoring and evaluation. Gig work experts, researchers and policymakers will also join the call.

The forthcoming listening session comes only a few weeks after U.S. Vice President Kamala Harris invited executives from major tech companies to the White House to discuss the dangers of AI. 

In attendance were nine of the Biden administration's top advisers on science, national security, policy and economics, along with the CEOs of OpenAI and Microsoft, and Meta CEO Mark Zuckerberg, among others.

Prior to the meeting, U.S. President Joe Biden implored tech companies to address the risks of the technology. 

Related: AI-generated image of Pentagon explosion causes stock market stutter

On May 4, U.S. officials released standards for key and emerging technologies, identifying eight sectors within the tech industry that could have a significant impact on the economy in the coming years. 

Most recently, Sam Altman, the CEO of ChatGPT creator OpenAI, testified before Congress in a “historic” session focused on the potential threats posed by generative AI.

The U.S. is not alone in forming a regulatory stance on emerging technology. Regulators in the United Kingdom recently pledged nearly $125M towards the creation of a ‘safe AI’ task force while the country focuses on AI “readiness.”

Meanwhile, in the European Union, officials are finalizing legislation that could become one of the world’s first sets of legal measures and guidelines regulating generative AI tools. The most recent round of deliberations on the EU AI Act included a ban on facial recognition in public spaces and on predictive policing tools.

Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?
