Technological innovations are transforming workplaces across Canada and around the world. Digital systems, new technologies and artificial intelligence (AI) are spreading quickly throughout our society, including in public services and public sector workplaces. The use of these systems and tools is expected to increase rapidly.

The impacts of AI systems and other digital technologies depend on how they are used. These technologies have the potential to improve workers’ lives. When controlled by people, AI systems could help workers deliver quality public services.

Left unchecked, AI and other digital innovations can have harmful consequences for work quality and workers’ rights. These technologies can enable surveillance, facilitate unjust hiring and firing decisions, increase the pace of work, and expose workers to new occupational risks. Some AI systems have also been found to discriminate against equity-deserving groups, placing women, Indigenous, Black, racialized, precarious and migrant workers at risk.

Workers have the power to control and shape how AI is used. We need to make sure our collective agreements and the laws governing these technologies protect workers and public services. Our jobs, our rights, and the services we provide are at stake. This guide will help CUPE members understand and navigate this wave of technological change.


Tech change is a union issue

AI systems are already in many CUPE workplaces and will affect every sector in our union. This is why we must ensure workers have a voice in how these digital systems are introduced and used in our workplaces. CUPE is tackling AI like any new technology, by focusing on how it affects our members’ jobs and rights, and the public services they deliver.

Our union is developing tools to help members:

  • understand AI,
  • spot the use of new technologies in our workplaces,
  • be on guard against the risks of AI-powered human resources systems, and
  • strengthen our collective agreements to protect our rights and defend public services.

Governments have been slow to regulate these new technologies, and it is clear workers must use our collective power to build the future that we want.

How AI works

Artificial intelligence is a term describing machine-based systems that use data and computer code to complete tasks or solve problems. AI systems are diverse, and the level of human involvement varies: some systems operate under close human oversight, while others, like machine learning systems, can operate with a high degree of autonomy. Unlike earlier forms of automation, where robots and machines were programmed to perform specific tasks, some AI systems can learn and adapt over time on their own. This means they can operate beyond preset instructions and evolve without human intervention.

AI systems rely on an important resource: data. These systems use complex sets of computer programming rules and commands, known as algorithms, to classify data, identify patterns and relationships in the data, and generate predictions. Algorithms are used to identify objects, read and summarize text, answer questions, make recommendations, or generate new content (for example, text, images, music, or videos).

[Figure: Data → Algorithm → Output]
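To make this pipeline concrete, here is a minimal sketch in Python. The shift data and the “tasks per shift” measure are invented for illustration, and real AI systems use far more sophisticated algorithms, but the structure is the same: the rule is learned from the data rather than written by a person.

```python
# Minimal sketch of the data -> algorithm -> output pipeline.
# All values and the "tasks per shift" measure are hypothetical.

# DATA: past shifts -- (tasks completed, supervisor rated shift "on target"?)
training_data = [
    (12, True), (9, False), (15, True),
    (8, False), (11, True), (7, False),
]

# ALGORITHM: "learn" a cutoff from the data instead of hard-coding one --
# here, the midpoint between the averages of the two groups of shifts.
on_target = [tasks for tasks, ok in training_data if ok]
off_target = [tasks for tasks, ok in training_data if not ok]
threshold = (sum(on_target) / len(on_target) +
             sum(off_target) / len(off_target)) / 2

# OUTPUT: a prediction about a shift the system has never seen.
def predict(tasks_completed):
    return "on target" if tasks_completed >= threshold else "flagged"

print(f"learned threshold: {threshold:.1f} tasks per shift")  # 10.3
print(predict(10))  # "flagged" -- different training data, different verdict
```

Feed the same program different data and it produces different outputs. The behaviour lives in the data, which is part of what makes workers’ data such a valuable resource.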


Workers’ data powers digital systems

Our world is increasingly digitized, and the volume of data being collected in our everyday lives has exploded. We are constantly generating data that can feed AI and algorithmic systems. This can be the steps we take, the time we spend on our cell phones, or how often and where we swipe our key cards at work.

As workers, our data powers AI. Data can be text, numbers, images, video or audio. This data is found in many day-to-day work activities including:

  • our time logs;
  • how many tasks we complete during a shift;
  • the content of our emails;
  • our vehicle location;
  • the conversations we have on customer service calls; and
  • our physical movements in the workplace.

Our data is also extracted through constant monitoring in our private lives, like our purchase history, browsing history and social media posts. When our data is fed into AI systems, employers and the private corporations that develop and use these systems take control of this valuable resource, including how data is stored and used.

Workers are also teaching AI systems how we do our jobs. In one case, a CUPE member working in closed captioning was unknowingly training an AI system by correcting its mistakes. Chatbots are being trained on workers’ call recordings and emails to mimic customer service support. This could lead to job cuts and layoffs in the future.

Government regulation is not keeping up with the introduction of AI systems and other digital innovations in our workplaces. We must use our power as workers to regulate AI systems now with the tool we know best, our collective agreements.

[Figure: Workers’ data powers AI]


AI systems in our workplaces

More and more of our employers are buying AI-powered chatbot systems from tech companies to do the work of customer service representatives. AI is being used to automate duties like handling inquiries, helping residents access public services or social supports, and scheduling appointments with care providers. Social service providers are using AI technologies to prioritize cases and make decisions about income supports and other resources.    

Employers are making AI a high priority, creating new departments and executive management roles focused on the technology. Governments are encouraging the rapid spread of AI with subsidies for companies or municipalities to use these systems.

This new technology brings both opportunities and risks. AI holds enticing promise. It can perform tasks that are dangerous or difficult for humans. It can also automate basic work functions, allowing workers to focus on more fulfilling tasks. AI systems could also broaden access to public services, for example through text-to-speech capabilities for persons with disabilities.

At the same time, AI comes with many dangers. Its risks include deskilling jobs and reducing job satisfaction. AI is not a substitute for human contact with people who depend on public services. AI technology like a chatbot takes away person-to-person interaction, human oversight, and discretion, which are at the heart of responsive public services. It can harm public services by reducing accountability and responsibility, and by eroding human autonomy and connection. Ultimately, how AI systems are designed and used will determine whether these technologies benefit or harm society.

AI systems are only as good as their programming and the data they are fed. Inaccurate or incomplete data, or mistakes in programming code, can lead to wrong predictions, unreliable recommendations and poor-quality outputs. Human biases and discrimination can be reflected in the data used in AI systems. For example, if the data used to train an AI system contains biases against workers and service users who share characteristics such as language, dialect, gender, or disability status, the decisions and content the system generates can be discriminatory. This is most likely to negatively affect people who are part of equity-deserving communities. This is why CUPE is calling for AI systems to come with strong public oversight, requiring transparency and strict accountability measures.

The outcomes of AI depend on how and why it is used, and what it’s designed to do. For example, if employers use AI to cut labour costs rather than to improve public service quality or access, AI could undermine our members’ standard of living or job security. When AI use eliminates job duties, lowers educational requirements, or automates decision-making processes, it can lead to lower wages, job loss and more precarious work, all with an outsized impact on equity-deserving workers.

Finally, a handful of giant for-profit corporations dominate AI design and development. This will make the public sector increasingly dependent on the private sector. It also raises serious questions about public control of, and private sector access to, vital personal data.


Managed by an algorithm

AI is being used to automate human resources functions and remotely manage workers. Employers are buying algorithmic management systems from third-party developers to:

  • Screen job applications
  • Conduct interviews
  • Rank and recommend candidates
  • Assign tasks and allocate work
  • Set schedules and staffing levels
  • Evaluate performance or productivity
  • Assess training needs
  • Trigger discipline

Algorithmic management systems are trained using workers’ data that is collected through monitoring and surveillance. The data is then processed through algorithms, which produce recommendations for the employer. These recommendations can include which candidate should be hired, which workers should be given disciplinary warnings, and how working hours or workflow should change.
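The sketch below illustrates that loop in miniature. Everything in it is hypothetical: the monitored fields, the target, and the flagging rule are invented for illustration, not drawn from any real vendor’s product.

```python
# Simplified sketch of the monitoring -> algorithm -> recommendation loop.
# The fields, target and flagging rule are all hypothetical.
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    worker: str
    tasks_completed: int   # from task-tracking software
    idle_minutes: int      # inferred from keycard swipes and activity logs

def recommend(records, task_target=10, idle_limit=60):
    """Flag workers who miss the target -- with no knowledge of context,
    such as training a new hire or dealing with an equipment failure."""
    recommendations = {}
    for record in records:
        if record.tasks_completed < task_target or record.idle_minutes > idle_limit:
            recommendations[record.worker] = "flag for supervisor review"
        else:
            recommendations[record.worker] = "meets target"
    return recommendations

shifts = [ShiftRecord("worker_1", 12, 20), ShiftRecord("worker_2", 8, 75)]
print(recommend(shifts))
# {'worker_1': 'meets target', 'worker_2': 'flag for supervisor review'}
```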

Municipalities are using AI technologies to set solid waste collection and snow removal schedules and routes. Past vehicle movements, route completion times, fuel consumption, braking and acceleration habits, speed, resident complaints, along with real-time traffic and weather data, are fed into an algorithm that sets optimal routes for drivers.
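As a simplified illustration, the sketch below orders stops with a basic nearest-neighbour rule, using invented coordinates. Commercial routing products layer traffic, weather and driving-behaviour data onto far more sophisticated optimization, but the core shift is the same: the algorithm, not the driver, decides the route.

```python
# Sketch of one basic routing technique: nearest-neighbour ordering.
# Coordinates are hypothetical; real systems also weigh traffic, weather,
# fuel use and past driving data.
import math

stops = {"depot": (0, 0), "stop_a": (2, 1), "stop_b": (5, 4), "stop_c": (1, 3)}

def nearest_neighbour_route(start="depot"):
    remaining = {name: xy for name, xy in stops.items() if name != start}
    route, here = [start], stops[start]
    while remaining:
        # Always drive to the closest unvisited stop next.
        nearest = min(remaining, key=lambda name: math.dist(here, remaining[name]))
        route.append(nearest)
        here = remaining.pop(nearest)
    return route

print(nearest_neighbour_route())  # ['depot', 'stop_a', 'stop_c', 'stop_b']
```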

Algorithmic management systems can recommend performance targets that can intensify workloads, put safety at risk or be outright unreasonable. These systems are unable to consider surrounding circumstances, including human limits, which can lead to unfair conclusions about productivity.


A collectively bargained requirement to disclose key information helps ensure employers are transparent with workers about when and how AI is being used to make important decisions about us. Employers must make sure workers understand algorithmic management systems. Too often, even employers do not understand the algorithmic “black box,” or formula, that drives the computer program. Employers may also try to dodge accountability for errors or glitches in algorithmic management systems.

AI-driven human resources systems are trained on historical data, which can reinforce and magnify existing workplace inequities. For example, recruitment algorithms may use voice recognition software to screen candidates. When these systems are trained on data from speakers with certain accents, they can process speech less accurately for candidates speaking a language that is not their first, contributing to inequity in hiring.
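The toy example below shows this mechanism with invented data. A naive “hiring score” trained on biased historical decisions simply reproduces them: two equally qualified candidates are scored differently because of the group they belong to, not their qualifications.

```python
# Sketch of how historical bias flows into a hiring algorithm.
# All data is invented; "group" stands in for any proxy, such as accent.
from collections import defaultdict

historical_hires = [
    # (years of experience, group, was hired?)
    (5, "A", True), (3, "A", True), (6, "A", True),
    (5, "B", False), (7, "B", False), (4, "B", True),
]

# "Training": the historical hire rate per group becomes the model's score.
counts = defaultdict(lambda: [0, 0])          # group -> [hired, total]
for _, group, hired in historical_hires:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def score(group):
    hired, total = counts[group]
    return hired / total

# Candidates with identical qualifications get different scores because
# the model learned the bias in the history, not merit.
print("group A:", score("A"))            # 1.0
print("group B:", round(score("B"), 2))  # 0.33
```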

Employers can also use workplace data to interfere with union activities. For example, constant data collection and monitoring embedded in algorithmic management systems may help employers to identify or interfere with union activities like membership drives or mobilizing for a strike vote.

Protecting jobs and services

Working people will only reap the benefits of AI if workers and their unions have strong legal and contract protections, including a meaningful voice in how the technology is used at work.

Unions should bargain everything relating to the terms, conditions and security of employment likely to be affected by AI use. CUPE is developing collective agreement language to protect our members’ rights and job security. There is no single “AI clause.” Locals must adopt a whole-contract approach to address AI’s opportunities and challenges for workers and public services.

Our union is also advocating for better legislative protections at all levels of government, including important amendments to Bill C-27, the federal bill that would enact the Artificial Intelligence and Data Act.

Federal legislation governing AI must apply to all government bodies and crown corporations. Any law must consider the potential collective and societal harms of AI systems. An arm’s-length commissioner should enforce the law, not a government insider. Lastly, legislation should safeguard workers’ rights, mandating that information be provided and negotiations take place with unions before introducing AI into a workplace.