Opinion: Proactive Digital Intelligence for Law Enforcement

Tom Bäckström
4 min read · Jan 14, 2021

There has been a great deal of public discussion about the need for law enforcement agencies to actively pursue surveillance. The position of law enforcement has always been that they must have the ability to intercept communication to prevent terrorism, violent crime, and so on. While this argument is superficially reasonable, in the sense that such an ability could have a positive impact, it does not take reality into account. Encryption is required for the security of all communication on the Internet, from bank transactions to military communications, and the technologies for such encryption are readily available.

Photo by Markus Winkler on Unsplash

To prohibit the use of strong encryption would thus weaken the security of all communication, including communication central to our safety. If there are backdoors in encryption, criminal and nation-state hackers will sooner or later gain access to them. Yet those same criminals and nation-state hackers will still be able to use strong encryption without backdoors, because such software is readily available. You would also face the question of which nations are allowed access to the backdoor: would you like North Korea to access your email? Or Sudan, Russia, Kuwait, or Canada? You have to draw the line somewhere. In other words, backdoors would weaken the security of everyone except exactly those criminals whose capture was the motivation for weakening encryption in the first place. There is thus no gain to be had in weakening encryption, only losses in security plus dilemmas of ethics and international politics.

The problem, in my opinion, is that we are approaching it from the wrong end. The focus should not be on an isolated part of the whole system, the encrypted channel, but on the big picture: what is it that we want to achieve? Safety. The question then becomes: how do we achieve safety when we have access to digital tools?

Before going forward, let us take a step back to how safety was achieved in the pre-digital age. Without data in hand, I believe the most common scenario was that a citizen observed something potentially dangerous and reported it to the authorities. For example, if Greg notices a fire, he calls the fire brigade; if Amanda sees men breaking a window in a dark alley, she calls the police. In other words, the security system relies primarily on citizens' observations, proactively reported to the authorities.

A conceptual tool which I have used before for a similar purpose is the personification of devices. What if devices behaved like persons? What if digital devices proactively reported suspicious activity just as citizens do? Smart speakers already do this: some smart speakers can call the police when they identify dangerous situations. Conversely, recordings from smart speakers can be used as evidence for you or against you.

Now, I am emphatically aware of the dangers here. I do not promote the ability of law enforcement to eavesdrop on your devices, but the opposite. Eavesdropping is intrusive and should be a last resort. Instead, I propose we consider the option that security could be improved by designing smart devices to report suspicious activity to the police. Devices with sufficient capacity, say an autonomous digital assistant, could have the functionality to contact the authorities when they detect something suspicious.
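To make the design space concrete, here is a minimal sketch in Python of what such a device-side reporting policy might look like. Everything in it, the class name, the opt-in flag, and the confidence threshold, is a hypothetical illustration of the idea, not a real product API.

```python
from dataclasses import dataclass

@dataclass
class ProactiveAssistant:
    """Hypothetical smart device that may report suspicious events.

    The user keeps full control: reporting is opt-in and can be
    switched off at any time.
    """
    reporting_enabled: bool = False   # opt-in, off by default
    threshold: float = 0.9            # only report high-confidence detections

    def observe(self, event: str, suspicion_score: float) -> str:
        # Decide whether this observation warrants contacting authorities.
        if not self.reporting_enabled:
            return "ignored (reporting disabled)"
        if suspicion_score < self.threshold:
            return "ignored (below threshold)"
        return f"reported: {event}"

# Usage: the device stays silent unless the owner has opted in.
device = ProactiveAssistant()
print(device.observe("glass breaking", 0.95))  # ignored (reporting disabled)
device.reporting_enabled = True
print(device.observe("glass breaking", 0.95))  # reported: glass breaking
```

The deliberate design choice here is that the "off button" discussed later is a first-class part of the interface, not an afterthought.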

There are plenty of practical and ethical dangers here, and we should tread very, very carefully. They are, however, all questions of careful engineering and design. This is very different from backdoors to encryption, where the facts demonstrate that backdoors are always detrimental to safety. If we design proactive functionalities into devices with great forethought and oversight, I don't see any impossible hurdles. Plenty of difficulties, yes. The devices will make errors: sometimes they will call the police when they should not have, and sometimes they will miss things they really should not have missed. There can be hidden biases such that some part of the population is unfairly targeted by the devices. That is not unlike real people. Sometimes a neighbor mistakes you for a burglar when you arrive home late at night from a party. That is not a nice situation, but it happens.

Photo by Chris Nguyen on Unsplash

A central idea here is that requiring perfection from devices is an unrealistic target. We should design and use devices and services for our benefit. Just as people make mistakes, devices will make mistakes too. The important difference, and advantage, is that when devices become a nuisance, we can turn them off. I cannot say the same about my nosy neighbor. She has no off button.


Tom Bäckström

An excited researcher of life and everything. Associate Professor in Speech and Language Technology at Aalto University, Finland.