APPG Makes Five Recommendations for Meeting AI Challenges

Amid the rise of AI-based workplace monitoring, here are the APPG's recommendations to protect workers

With the spread of motion sensors, analytics tools, activity monitors, and AI in the workplace, almost everything employees do can now be monitored. AI systems do more than just count time: they apply algorithms to draw conclusions on their own.

Workplaces are increasingly designed to monitor employees. Following a huge rise in workplace monitoring, a committee of MPs and peers says this calls for strict rules on the use of algorithms both to supervise employees and to make decisions about them.

The UK’s All-Party Parliamentary Group (APPG) on the Future of Work cautioned that the growing reliance on algorithmic surveillance and management tools is linked with significant negative impacts on the conditions and quality of work across the country.

The committee found that monitoring and automated decision making were associated with ‘pronounced negative impacts on mental and physical wellbeing, as workers experience the extreme pressure of constant real-time micromanagement and automated assessment’. 

The APPG set out its findings in a report called “The New Frontier: Artificial Intelligence at Work”, drawing on the European Commission’s Joint Research Centre work on electronic monitoring and surveillance in the workplace. The report found that the explosive growth of AI-based tools carries risks to workers’ wellbeing, threatening to erode trust between employers and employees and to cause psycho-social harm unless action is taken to regulate its use.

The report reads: “AI is transforming work and working lives across the country in ways that have plainly outpaced, or avoided, the existing regimes for regulation. With increasing reliance on technology to drive economic recovery at home, and provide a leadership role abroad, it is clear that the government must bring forward robust proposals for AI regulation to meet these challenges”.

To meet these challenges, the APPG makes five recommendations aimed at ensuring transparency and fairness in the UK’s AI ecosystem.

 

Recommendation 1: The Accountability for Algorithms Act 

The first recommendation is to create an Accountability for Algorithms Act (AAA). This would establish new rights and responsibilities to ensure that all significant impacts of algorithmic decision-making on work and workers are considered. The AAA rests on four planks:

  • Identifying individuals and communities who might be impacted by algorithmic decisions 
  • Undertaking risk analysis aimed at outlining potential pre-emptive actions 
  • Taking appropriate action in response to any analysis undertaken 
  • Ongoing impact assessment and appropriate responsive action 

 

Recommendation 2: Updating Digital Protection 

The AAA should fill gaps in existing protections against technology at work, providing workers with easy-to-access information detailing how algorithmic systems affect them and the right to shape their design and use. The report calls for greater protection for employees who might be subject to monitoring. 

 

Recommendation 3: Enable a Partnership Approach

To ensure that AI tools are designed in the wider public interest, the government should develop partnerships with developers and the wider AI ecosystem. Unions and NGOs should also be given additional rights to request transparency and involvement regarding how algorithms are used in the workplace. The report calls for unions to be integrated into the AI ecosystem even further. “Unions should also be allowed to develop new roles within the AI ecosystem to redress a growing imbalance of information and power and help deliver genuinely human-centred AI in the public interest,” it reads. It proposes that the UK Trades Union Congress (TUC) be given the role of developing and delivering artificial intelligence training to workers.

 

Recommendation 4: Enforcement in Practice 

The report argues that, while the Digital Regulation Cooperation Forum (DRCF) was created to ensure greater cooperation on digital and online regulation, there is still “a very mixed picture” of who is responsible for what. This makes it hard to figure out which body is accountable for upholding workers’ digital rights, particularly as new technologies evolve rapidly.

 

Recommendation 5: Supporting Human Centred AI 

The APPG’s fifth recommendation proposes that the UK incorporate a set of fundamental rights and values into the development and application of new AI and automation-based technologies in the workplace. “A sharper focus on Good Work for all will enable the development of human-centred AI and a human-centred AI ecosystem,” the report reads.