After a couple of years in the doghouse, the policing technology nicknamed Digidog is returning to the mean streets of New York. Withdrawn by former New York City Mayor Bill de Blasio in 2021 after protests by civil liberties groups, the four-legged robotic friend, made by Boston Dynamics, is being brought back by current Mayor Eric Adams, alongside a Knightscope K5 security robot and a device that fires a GPS tracker onto a fleeing vehicle.
The K5, looking like a giant Easter egg on wheels, is mainly equipped with cameras, microphones, and loudspeakers, and has form for running over a toddler in Silicon Valley and falling into a fountain in Washington DC. The dog, also equipped only with surveillance kit, was condemned as creepy and dystopian when first deployed in 2020.
The argument for more or less autonomous remote surveillance machines — eggs, dogs or drones — is that they can go into dangerous situations instead of risking human lives. Unarmed machines also pose less of a threat to a suspect than a nervous or angry human police officer with a gun. Nevertheless, the inhuman nature of the surveillance is symbolic of a relationship between police and policed that has little trust, let alone human warmth.
Yet the real issue here is not the robots but the rules governing their use. In 2020 New York City Council passed the POST Act — Public Oversight of Surveillance Technology. Since then, the New York Police Department has been required to publish draft Impact and Use Policies for all surveillance technology it intends to use, or was already using before the POST Act came into force. This would include Digidog, K5, and any other shiny new toys the NYPD adds to its toolkit.
This is an admirable step in bringing police use of technology into the realm of public accountability, and goes well beyond what most UK forces do. The New York City Department of Investigation has oversight of these policies, and is responsible for checking that actual use of the technology follows the published policies.
Unfortunately, the NYPD's interpretation of the POST Act is so loose that this oversight is impossible to carry out: single policies cover multiple technologies, and neither the storage nor the sharing of data is monitored or governed by the policies.
“Black Mirror”. We are there.
“Unarmed machines also pose less of a threat to a suspect than a nervous or angry human police officer with a gun.”
Because the last thing we want is for a suspect to feel a threat.
But if they’re already running over toddlers, are they truly less of a threat? Do we want to give a machine a gun where it might shoot someone because of a technical glitch? I think there is a role for these devices in law enforcement, but there should be some pretty hard limits imposed.
I for one welcome our new robot dog overlords.
If ever there was a perfect tool to impose Fascism and a Totalitarian nightmare on a society…… it is programmable robots as a police force.
Stop this madness immediately.
One robot has the IQ of every ‘pleeceman’ in Britain — some of the thickest people in the nation.
The device shooting a GPS tracker into a fleeing vehicle sounds useful. I can’t see the problem in the police using a surveillance device to check out potential criminal activity whether in the form of a dog or other device. Perhaps someone can explain why it is sinister.
Feature creep? I would agree that we need strict regulation; we have a danger of going too rapidly down the ED-209 path. (The misuse of surveillance data is a separate issue).
Get worried if the captchas start changing such that instead of training self-driving cars, we’re being asked to ‘select the person carrying a gun’.
Like any tool, including firearms, it can be used for good or for bad. I think some of the angst over robots and AI is irrational fear of new technology. One can go back and find similar fears of things like the steam engine, the automobile, etc. What we should really worry about is what unscrupulous greedy or power hungry people will use these things to do. That, I think, is where the fear is legitimate.
“…unscrupulous greedy or power hungry people…” You mean “politicians”?
“One can go back and find similar fears of things like the steam engine, the automobile, etc.”
Yes, it’s not as though the automobile would go on to lead to millions of deaths.
Touche. Well played sir.
There is the kernel of an answer within your question:
How can we be sure robots “shooting a GPS tracker” hit their target, and not an innocent bystander? And, should it happen, who is responsible?
I’m compelled to comment that the perfect is the enemy of the good.
I’m compelled to comment that the perfect is the enemy of the good.