The growth of artificial intelligence, including the use of algorithms and automated management systems, presents unique opportunities and challenges in the workplace. It also carries the risk of increasing inequities and prejudicial outcomes for workers, from hiring and scheduling decisions to disparities in pay, promotions, demotions, and termination.
Some of these systems pose particular risks for women and workers of color, both because they can embed systemic biases and because these workers are more likely to work in sectors that use this technology. Workers are also affected by AI systems when accessing benefits or services, such as unemployment insurance. As AI grows in the employment and benefits space, it should be used to enhance decision-making and efficiency while also protecting workers’ privacy, safety, and rights.
Workers are coping with unprecedented and increasingly intensive forms of electronic monitoring and productivity tracking, according to Labor Department stakeholders. These technologies, they tell us, are negatively affecting workplace conditions and workers’ health and safety, including their mental health. For instance, call center agents, who are often electronically monitored and held to productivity standards as intensive as those applied to warehouse workers, report high levels of stress, difficulty sleeping, and repetitive stress injuries. In addition, constant monitoring may discourage workers from engaging in legally protected activities, including taking action with co-workers to improve working conditions, organizing and collectively bargaining, or filing complaints with government agencies about violations of labor and employment laws.
Even when workers know that data about them is being collected to monitor their performance and provide valuable information to their employer, they do not control or own that data. In addition, a lack of human oversight in automated systems may leave workers unable to correct or appeal adverse employment decisions or benefits determinations.
Monitoring practices have become more common with the rise of remote work, particularly in occupations that involve large amounts of computer work. For instance, some companies require workers to install facial or eye recognition systems that scan workers’ faces at regular intervals to verify their identity and confirm that they are in front of their computer and on task. If workers look away from their screens for too long, the system registers them as no longer at work.
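To make the mechanics concrete, here is a minimal sketch of the timeout logic such presence-monitoring tools apply. It is purely illustrative: the function name, the sample format, and the 30-second grace period are assumptions for exposition, not any vendor’s actual implementation.

```python
# Illustrative sketch only: the names, sample format, and threshold below
# are assumptions, not any vendor's actual implementation.

AWAY_THRESHOLD_SECONDS = 30  # assumed grace period before flagging a worker


def flag_away_times(gaze_samples, threshold=AWAY_THRESHOLD_SECONDS):
    """Return timestamps at which a worker would be flagged as 'away'.

    gaze_samples: iterable of (seconds_since_start, face_detected) pairs,
    e.g. the output of a webcam-based detector polled at a fixed interval.
    """
    flagged = []
    last_seen = 0.0  # time the detector last saw the worker's face
    for t, detected in gaze_samples:
        if detected:
            last_seen = t
        elif t - last_seen > threshold:
            flagged.append(t)  # worker registered as no longer at work
    return flagged


if __name__ == "__main__":
    # Simulated detector output polled every 10 seconds: the worker looks
    # away from t=20 through t=50, exceeding the 30-second grace period.
    samples = [(0, True), (10, True), (20, False), (30, False),
               (40, False), (50, False), (60, True)]
    print(flag_away_times(samples))  # -> [50]
```

Even in this toy form, the logic shows how a fixed timeout treats any gap in face detection as absence, regardless of the reason a worker looked away.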
The administration’s Blueprint for an AI Bill of Rights, which followed extensive collaboration with stakeholders and across the federal government, acknowledges these potential harms and outlines steps employers can take to mitigate them. The Blueprint sets out five aspirational and interrelated principles for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy:
- Safe and Effective Systems: You should be protected from unsafe or ineffective systems, including foreseeable harms from the use or impact of automated systems.
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- Data Privacy: You should be protected from abusive data practices through built-in protections, and you should have choices over how data about you is used.
- Notice and Explanation: You should know when an automated system is being used and understand how it can affect you.
- Human Alternatives, Consideration, and Fallback: You should have access to appropriate human alternatives and other remedies when systems result in discrimination or other harms.
The Blueprint for an AI Bill of Rights also includes a technical companion with concrete steps that can be taken now to integrate these principles into the use of AI. Many of these are topics the Department of Labor is already exploring. For example, the Office of Federal Contract Compliance Programs and the Equal Employment Opportunity Commission launched a multiyear collaborative effort to reimagine hiring and recruitment practices, including the use of automated systems. In a recent roundtable, speakers explored potential barriers that automated technologies present to diversity, equity, inclusion, and accessibility.
The Office of Labor-Management Standards is ramping up enforcement of required surveillance reporting to protect worker organizing. And the Partnership on Employment & Accessible Technology (PEAT), funded by the Department of Labor’s Office of Disability Employment Policy, has released the AI & Disability Inclusion Toolkit and the Equitable AI Playbook.
Businesses will increasingly look to adopt AI tools that automate decision-making and promote productivity, efficiency, customer satisfaction, and worker safety, among other goals. This Blueprint provides a framework to ensure these tools are safe and effective, do not have unintended consequences, and are not used to threaten workers’ rights to a healthy and safe workplace, to collective action and labor representation, and to a workplace free from discrimination. Ensuring that workers have input and voice in the design and deployment of such AI is critical to enhancing its value in the workplace.
Tanya Goldman is a senior counselor to the secretary of labor.