
Robert Oulds is the Executive Director of ImpACT International, a London-based human rights think tank concerned with policies at the intersection of states and businesses, and Director of the Bruges Group.

A dystopian future is emerging. Who is watching us from behind the screen? Who has access to our data and facial images? How can we ensure that our private information is kept safe and hidden?

These are some of the questions that have dominated the minds of employees across the globe as we see a drastic rise in invasive surveillance technology in the education and employment sectors.

One of the most concerning issues at hand is the use of facial recognition, which is the process of identifying and verifying a person’s identity using their digital facial profile.
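
For illustration, verification of this kind typically reduces each face image to a numerical embedding and measures how closely the two match. Below is a minimal sketch, assuming the open-source Python face_recognition library; the file names are hypothetical, standing in for a stored reference photo and a newly captured image.

    # Minimal one-to-one face verification sketch using the open-source
    # face_recognition library; the file paths below are hypothetical.
    import face_recognition

    # Load the stored reference photo and the newly captured image.
    reference = face_recognition.load_image_file("employee_reference.jpg")
    capture = face_recognition.load_image_file("webcam_capture.jpg")

    # Reduce each detected face to a 128-dimensional embedding.
    reference_encodings = face_recognition.face_encodings(reference)
    capture_encodings = face_recognition.face_encodings(capture)

    if reference_encodings and capture_encodings:
        # compare_faces reports a match when the distance between the two
        # embeddings falls within a tolerance (0.6 by default).
        match = face_recognition.compare_faces(
            [reference_encodings[0]], capture_encodings[0]
        )[0]
        print("Identity verified" if match else "No match")
    else:
        print("No face detected in one of the images")

In practice, a stricter tolerance reduces false matches at the cost of more false rejections, which is one reason the accuracy and error rates of these systems matter so much in an employment context.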

According to guidance from the committee of Convention 108, the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, a legal framework for the use of facial recognition should be put in place. This means that biometric data should be processed only where the processing rests on a legal basis laid down in domestic law.

Convention 108 also proposed a set of guidelines for private and public entities to adhere to, to ensure that the technology is not used to infringe on the human dignity, human rights, democratic rights, or freedoms of any person.

Companies have taken facial recognition further, using it to identify employees’ emotions, feelings, personality traits, levels of engagement, vocabulary, compatibility with colleagues, and mental health status.

This form of ‘affect recognition’ has allowed companies to move through the recruitment phase faster than usual. Companies such as Intel, Unilever, Vodafone, Goldman Sachs, and JP Morgan have recently used this technology to generate automatic compatibility scores for candidates.

However, this form of streamlined recruitment jeopardises candidates’ ability to be recruited under the fair and equal circumstances that attend other, conventional recruitment practices which do not use this technology.

Thus, employers who use facial recognition should give details on the purpose of its use, how long employees’ facial profiles will be held in the company’s databases, what safeguarding measures the employer provides, and the reliability and accuracy of the surveillance technologies used.

Covid-19 and facial recognition

Since the start of the Covid-19 pandemic, surveillance technology has been used by employers to extend their monitoring to employees who work from home.

Since then, companies have expanded their monitoring tools to include screenshotting, audio and video recording, and even the logging of employees’ keystrokes, in some cases without the employee’s knowledge.

This intrusion into a person’s surroundings is concerningly common when workers are provided with work devices, leaving them with little to no say in how and when they may be monitored.

This new form of home surveillance has a significant impact on employees’ mental health and well-being: they are monitored every second of their working hours while dealing with the stress of not knowing who is watching them from behind a computer screen.

This form of surveillance also serves as a key demotivating factor, undermining employees’ work ethic: essentially, the employer is fostering a culture of mistrust and unjustifiable regulation.

Although surveillance software on employees’ computers is nominally consensual, that does not take away from the fact that it intrudes into a person’s home environment.

An example of this can be seen in the case of PricewaterhouseCoopers (PwC), which used facial recognition technology to track employees through their webcams at home and to question any time spent away from their desks or computer screens.

Similar technology has been used by the Metropolitan Police, who, over recent years, have invested heavily in facial recognition technology. Whilst the Met have justified this by stating it has significantly upgraded the force’s technological capabilities, it has coincided with concerns over racial discrimination, privacy breaches, and the abuse of police powers.

This has essentially created an environment prone to mistrust and abuse, one that needs to be reviewed in light of its human rights impact.

The monitoring of keystrokes and the recording of voice could put employees at risk, as this advanced technology can capture sensitive and private information about an individual and their family.

Recommendations

Despite growing outrage over biometric surveillance technologies, governments have disappointingly failed to address the issue with the urgency it demands.

In 2018, the European Court of Human Rights ruled that the UK’s mass surveillance laws had not done enough to protect individuals’ rights; rather, they had unlawfully breached citizens’ rights to privacy and freedom of expression for decades.

Ultimately, private businesses should not use facial recognition technologies in uncontrolled or unsecured environments. Coupled with this, there should be extensive policies to safeguard the use of such technology where it does occur.

If facial recognition technology is to be used, businesses should:

  • Introduce and follow strict, transparent guidelines that ensure the protection of employees’ privacy rights and personal freedoms, including the security of their private and personal data.
  • As set out by the Council of Europe in 2021, introduce guidelines that prohibit any form of facial recognition used for the sole purpose of identifying an employee’s race, sexuality, gender, ethnic origin, health, social status or age. This will ensure that surveillance technologies are not used by employers to discriminate against their employees based on visual presentation.
  • Undergo regular Data Protection Impact Assessments (DPIAs) when processing personal data in high-risk contexts. A DPIA is the process by which a company systematically identifies the risks that arise from processing personal data and mitigates potential violations before they take place.
  • Give employees the freedom to object and to demand additional information about their company’s surveillance technology. Employees should also be allowed to review their data and privacy rights before signing employment contracts.
  • Support employment laws that are regularly updated to keep pace with the fast-evolving use of surveillance technology in the employment sector.
  • Reduce reliance on facial recognition given its documented accuracy problems with darker skin tones: face-scanning systems have commonly misidentified black employees more often than lighter-skinned ones.

The Data Protection Act 2018 does provide protections. They are, however, poorly understood and all too often flouted.

Employers must produce a Data Protection Impact Assessment; indeed, under Article 35 of the GDPR, such an assessment must be provided for high-risk processing. Yet they rarely do so.

What is more, the Government is seeking to undermine these protections. Indeed, the abuse of facial recognition is just the start of how the state is taking liberties with our personal information.