Algorithms are making increasingly high-stakes decisions about our lives in the workplace, yet current laws are failing to protect workers from the risk of unfair treatment and discrimination this creates.

The UK’s national trade union federation, the Trades Union Congress (TUC), has warned of “huge gaps” in the law over the deployment of AI at work, and has called for urgent legal reform to prevent algorithms from causing widespread harm to employees.

The union’s recommendations include mechanisms to ensure that workers know exactly which technologies likely to affect them are being used, as well as a right to challenge any decision made by an AI system that is deemed unfair or discriminatory.


The TUC’s manifesto was published alongside a report, produced with employment rights lawyers, which concluded that employment law is not keeping pace with the rapid expansion of AI at work.

Some uses of AI are harmless and even enhance productivity: for example, it is difficult to object to an app that predicts faster routes between two stops for a delivery driver. But in many cases, algorithms are used to drive important, sometimes life-changing decisions about employees – and yet even in these scenarios, regulation and oversight are lacking, said the TUC. 

“There are all sorts of situations with potential for unfairness,” Tim Sharp, the TUC’s employment rights lead, told ZDNet. “The spread of technology needs to come with the right set of rules in place to make sure that it is beneficial to work and to minimize the risk of unfair treatment.” 

Algorithms can currently be tasked with making or informing decisions that the TUC deems “high-risk”; for example, AI models can be used to determine which employees should be made redundant. Automated absence management systems were also flagged, based on examples of the technology wrongly concluding that employees were absent from work and triggering performance management processes as a result.

One of the most compelling examples is the use of AI tools in the early stages of the hiring process, where algorithms scrape CVs for key information and sometimes run background checks to analyze candidate data. In a telling illustration of how things can go wrong, Amazon scrapped its attempt to deploy this type of technology after the model was found to discriminate against women’s CVs.

According to the TUC, if left unchecked, AI could therefore lead to greater discrimination in high-impact decisions such as hiring and firing.

The issue is not UK-specific. Across the Atlantic, Julia Stoyanovich, professor at NYU and founding director of the Center for Responsible AI, has been calling for more stringent oversight of AI models in hiring processes for many years. “We shouldn’t be investing in the development of these tools right now. We should be investing in how we oversee those technologies,” she told ZDNet. 

“I do hope we start putting some brakes on this, because hiring is an extremely important domain, and we are not in an environment in which we are using these tools in alignment with our value system. It seems that now is the right time to ramp up regulatory efforts.” 


For Stoyanovich, a mixture of more transparency and a stronger recourse procedure is necessary to protect those who are affected by algorithmic decisions. In other words, individuals should know when and how an AI model is used to make decisions about them, and they should have the means to ask for a human review of that decision. 

The proposal closely mirrors the recommendations laid out in the TUC’s latest manifesto. A prevalent issue is that many workers are simply unaware that their employers are using AI tools at all, not only in the hiring process but also in the workplace.

The trend has only been aggravated by the rapid digitization of work caused by the COVID-19 pandemic. As staff turned to remote working, some employers put new monitoring technologies in place to keep an eye on employees’ productivity, even in their homes. Recent reports show that as many as one in five businesses are now tracking employees online via digital surveillance tools, or have plans to introduce the technology.

AI systems can be used to log the hours worked by staff, the number of keystrokes made in an hour, social media activity and even photographic “timecards” taken via a webcam.

Yet it would seem that most workers don’t realize that they are being monitored: previous research from the TUC showed that fewer than one in three employees (31%) are consulted when any new form of technology is introduced. In contrast, an overwhelming 75% of respondents felt that there should be a legal requirement to consult staff before any form of workplace monitoring is deployed. 

“A lot of workers are not certain of which data is collected and what use it’s being put to,” says Sharp. “We are worried about what this means for the balance of power in the workplace. It should be clear what these uses are for the technology, and these tools should be introduced in consultation with workers. In most workplaces, this doesn’t appear to be happening.” 


The TUC called for changes to the UK’s data protection laws to ensure that staff understand exactly how AI is operating in their workplace. Employers, said the organization, should keep a register containing information about every algorithmic system used in the workplace, and employees should have the right to ask for a personalized explanation of how the technologies work.

Similar demands were made recently by the UK Labour Party, which alongside the professional trade union Prospect has been campaigning for changes to the Employment Practices Code published by the Information Commissioner’s Office (ICO), in a bid to update guidance on workplace monitoring in light of the increasing use of remote-monitoring technologies.

Europe’s human rights watchdog, the Council of Europe, has, for its part, recommended tougher regulation of facial recognition technology, specifically citing the risks associated with using digital tools to gauge worker engagement. The institution has called for an outright ban on the technology where it poses a risk of discrimination.
