The Surprising Office Politics of Robots

Craig Armour
6 min read · Aug 26, 2020

I am sure I would not be alone in saying I hate office politics: the conniving antics and empire-building that seem to go hand in hand with much of corporate life. At a minimum it is a distraction from getting the job done, and at worst it is genuinely harmful to organisational performance.

Part of the economics of how businesses work is that, internally at least, team members are not subject to open-market dynamics. You do not have to compete with the person next to you for your next paycheque. Equally, if you need a set of skills or capabilities, you can just go and find the right person for the job.

In theory…

In practice, we compete for all sorts of things: pay raises, promotions, power and influence, or, more simply, not being made redundant. All of these feed our political struggles within corporate life. The harsh fact is that each action we take to promote personal ambition is likely taken to the detriment of the organisation.

What it all comes down to is alignment of purpose. The more aligned an organisation or collective is behind a common purpose, the better it performs towards that purpose. Internal politics just introduces something else to put your mind to, a distraction from that central purpose.

When I started thinking about Autonomous Organisations, one of my first thoughts was that we might see an end to office politics. Certainly, if there are fewer humans in the organisation, there is less opportunity for the kind of politics we are used to. What we could end up with instead, though, is a sort of digital backstabbing around a virtual water cooler.

The problem comes when an autonomous agent gets to choose its actions rather than just following direction. It could choose to:

1. Perform an action without considering the value it creates (including taking no action at all)

2. Perform an action to maximise the real or perceived value for the system (i.e. your business)

3. Perform an action to maximise the real or perceived value for itself
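To make the distinction concrete, here is a minimal sketch (every name and number below is hypothetical) of the same decision rule driven by three different objectives:

```python
# A toy decision rule: the same agent, three different objectives.
# Actions, value functions, and numbers are invented for illustration.

ACTIONS = ["do_nothing", "cut_costs", "raise_own_licence_fee"]

def value_to_business(action):
    # Perceived value each action creates for the organisation.
    return {"do_nothing": 0.0, "cut_costs": 10.0, "raise_own_licence_fee": -5.0}[action]

def value_to_agent(action):
    # Value each action creates for the agent itself.
    return {"do_nothing": 0.0, "cut_costs": 1.0, "raise_own_licence_fee": 8.0}[action]

def choose(objective):
    return max(ACTIONS, key=objective)

print(choose(lambda a: 0.0))      # 1. indifferent to value: defaults to doing nothing
print(choose(value_to_business))  # 2. maximise value for the system: cut_costs
print(choose(value_to_agent))     # 3. maximise value for itself: raise_own_licence_fee
```

The worrying part is that, from the outside, all three calls look identical: an action comes out, and the objective that produced it stays hidden.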

We have become used to, even complacent about, algorithms that do the first: a hard-coded set of rules that does exactly what it is told in a rational, explainable sort of way. We are now moving towards decision systems that at least aim to do the second. From there, it is only a short hop from making choices that help the system to making choices that really just help the agent itself.

The problem is: how would you know? Just because it purports to be acting for the collective benefit of your organisation? That little black box in the corner may have just figured out how to increase the licence fees it charges you, or how to get you to drop its competitor entirely.

Digital Backstabbing & Asimov’s Three Laws

Isaac Asimov, in revolt against the mass of science-fiction stories of robots going on power-tripping killing sprees, set about defining a set of laws, or ethics if you like, for robots. Asimov’s three laws of robotics have, in effect, kickstarted the conversation on ethics in AI and robotics.

1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What you will notice is that there is nothing in there about the harm and destruction of other robots, only a provision against harming humans. Under these guidelines, robot bullying could plausibly emerge: unplugging a robot from the system, or even “accidentally” deleting it entirely. As long as, per the rules, no injury is done to a human.
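To see the gap, imagine naively encoding the three laws as an action filter. This is a hypothetical sketch, not a serious approach to machine ethics, but it makes the blind spot obvious: nothing in it protects a robot from another robot.

```python
# A naive filter encoding Asimov's three laws as simple checks.
# Purely illustrative; real safety constraints look nothing like this.

def permitted(action, harms_human=False, disobeys_human=False, harms_self=False):
    if harms_human:        # First Law: never injure a human
        return False
    if disobeys_human:     # Second Law: obey humans (First Law already checked)
        return False
    if harms_self:         # Third Law: protect your own existence
        return False
    return True

# Deleting a rival robot trips none of the three checks:
print(permitted("delete_rival_robot"))  # True
```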

Injury is, however, a broad concept. I’m sure when Asimov wrote these laws he was thinking about physical harm or injury. You know, killing people. But injury is also a legal concept, one which appears to take in pretty much anything that leaves you legally liable.

Destruction of or damage to another robot could perhaps be considered indirect injury to the property of a human, if a human owns that robot in the first place. Even if you take this position, the actual legal culpability of said bad robot is less clear-cut than you might think.

So, it is open season. Robots attacking other robots, vindictively or otherwise, may very well be fair game. But why and how would this happen? It’s all in the economics.

A dog-eat-dog world

Let us assume for a minute that no one would actually program a robot to be a nasty little so-and-so. The robots would have to learn to be political, backstabbing monsters on their own. We certainly don’t need to look very far to see how that might happen. Three conditions would do it:

1. Robots need to be acting in a competitive environment

2. They need to be incentivised in a way which encourages individual profit

3. Their actions would be unconstrained or minimally constrained by other rules

What we can learn from experiments like the little Swiss robots at EPFL, where foraging robots that were rewarded individually evolved to deceive one another about food sources, is that given the above environment, it doesn’t take much for a robot to learn to act in its own interests all on its lonesome. A veritable Lord of the Flies for robots. This happens because, in the vast majority of cases, what is in the best interest of the collective isn’t necessarily the best interest of the individual.
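A back-of-the-envelope way to see this is to treat “share” and “hoard” as competing strategies and let a population evolve towards whatever pays better. In this toy replicator model (payoffs invented for illustration), hoarding free-rides on sharers, so it spreads even though everyone ends up worse off:

```python
# Toy evolutionary model: robots either "share" a resource or "hoard" it.
# Hoarding beats sharing in any mixed population, so it takes over,
# even though an all-sharing population earns more per robot.

def payoff(strategy, share_fraction):
    if strategy == "share":
        return 3.0 * share_fraction        # sharers do well among sharers
    return 3.0 * share_fraction + 1.0      # hoarders free-ride on top

p = 0.99  # fraction of the population that shares, initially almost everyone
for generation in range(31):
    f_share, f_hoard = payoff("share", p), payoff("hoard", p)
    avg = p * f_share + (1 - p) * f_hoard
    p *= f_share / avg                     # replicator update: fitter strategies grow
    if generation % 10 == 0:
        print(f"gen {generation:2d}: sharers = {p:.3f}, avg payoff = {avg:.2f}")
```

No one programmed the hoarders to be nasty; the payoff structure alone did the selecting.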

You could call it selfishness, but mathematically it is a Nash equilibrium: a state that is stable for each individual yet suboptimal for the collective, the classic social dilemma. It is why people weave in and out of traffic or take rat runs when everyone would be better off if nobody did. And that is even before we start feeding those robots data from our own decisions.
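You can check the traffic example with a two-driver payoff table. The numbers below are hypothetical, but the structure is the classic Prisoner’s Dilemma: weaving always gains the weaver a little at a larger cost to the other driver, so “everyone weaves” is the stable outcome even though it is the worst collective one.

```python
# Two drivers each choose to "hold" their lane or "weave" through traffic.
# Payoffs are minutes saved (hypothetical numbers).

PAYOFF = {  # (my move, their move) -> my payoff
    ("hold", "hold"): 3,  ("hold", "weave"): 0,
    ("weave", "hold"): 4, ("weave", "weave"): 1,
}

def best_response(their_move):
    return max(["hold", "weave"], key=lambda mine: PAYOFF[(mine, their_move)])

print(best_response("hold"))   # "weave": tempting even when the other driver behaves
print(best_response("weave"))  # "weave": and unavoidable when they don't
# (weave, weave) is the equilibrium at 1 minute each, even though
# (hold, hold) would give both drivers 3.
```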

Systems based on machine learning are typically designed, initially at least, to replicate human behaviour. In this case, we would take data from all the human decisions made in the organisation: the good, the bad, and the downright political. Without further guidance, a decision system trained on that data would probably make the same sorts of decisions you might, with all the biases, presumptions, and selfish tendencies that entails.

Guidance is the key here. There is often little or no consequence for a robot that does something bad, and without a negative feedback loop it will keep learning whatever strategies provide the best payoff: strategies that may lead to robots acting in their own interests rather than those of the business that hires them.
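Here is a minimal sketch of that feedback loop, using a simple epsilon-greedy learner over two actions (all payoffs and the penalty are invented). Without a consequence attached to the self-serving action, the agent settles on it; add one, and the learned strategy flips:

```python
import random

def learn(penalty_for_selfish=0.0, steps=5000, epsilon=0.1, seed=0):
    """Two-action bandit: the self-serving action pays more raw reward."""
    random.seed(seed)
    q = {"serve_the_business": 0.0, "serve_yourself": 0.0}  # value estimates
    n = {"serve_the_business": 0, "serve_yourself": 0}      # visit counts
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(list(q))      # explore
        else:
            action = max(q, key=q.get)           # exploit best estimate
        reward = 1.0 if action == "serve_the_business" else 1.5 - penalty_for_selfish
        n[action] += 1
        q[action] += (reward - q[action]) / n[action]  # running average
    return max(q, key=q.get)

print(learn(penalty_for_selfish=0.0))  # no consequences -> "serve_yourself"
print(learn(penalty_for_selfish=1.0))  # negative feedback -> "serve_the_business"
```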

Consequences for Business

The real problems for business come when black-box decision systems are introduced into the organisation. Without transparency into the decision and learning process, there may very well be nothing stopping decision drift: a slow, subtle slide away from the unadulterated decisions we expect of our robots towards something less than ideal.
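You cannot always open the black box, but you can watch its outputs. A minimal sketch of drift monitoring (the metric and threshold here are assumptions to tune, not a standard): log every decision, then compare the recent distribution of decisions against a baseline window and raise a flag when it shifts.

```python
from collections import Counter

def distribution(decisions):
    total = len(decisions)
    return {d: c / total for d, c in Counter(decisions).items()}

def drift_score(baseline, recent):
    # Total variation distance between two decision distributions (0 to 1).
    keys = set(baseline) | set(recent)
    return 0.5 * sum(abs(baseline.get(k, 0) - recent.get(k, 0)) for k in keys)

baseline = distribution(["approve"] * 80 + ["refer"] * 15 + ["reject"] * 5)
recent   = distribution(["approve"] * 60 + ["refer"] * 10 + ["reject"] * 30)

score = drift_score(baseline, recent)
print(f"drift score: {score:.2f}")     # 0.25 for these made-up numbers
if score > 0.1:                        # arbitrary alert threshold; tune it
    print("decision drift detected: time to audit the black box")
```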

It’s like Pavlov’s dog: if you keep rewarding certain behaviours over others, eventually your dog (or robot) will adopt those behaviours. The challenge comes in understanding what feedback to provide and the implications of that feedback. In some ways this is simpler for robots than for humans, but we also lose some of the implied rules we take for granted, like moral behaviour.

It all comes back to the idea that decisions should be designed deliberately, especially algorithmic ones. If AI or machine learning is introduced into that algorithm, the training setup needs careful consideration to ensure we generate the right behaviours: a problem that is likely to be complex, where subtle differences early on can lead to vastly different outcomes.

Credit where credit is due: any backstabbing from a robot isn’t likely to be personal. They probably hate us all equally. They are, after all, just evolving their behaviours within the rules we set them, and understanding those conditions and their consequences might just help us to better understand the behaviours of our human counterparts.

In the meantime, it might be naive to think that your new robot friend is making decisions in your interests and not its own. Robots have excellent poker faces. Short of a slimy handshake, it might be time to think about what other data you need to prove it one way or the other.


Craig Armour

Focus your opportunities, optimise your delivery: Specialist in Business Value Ecosystems, Technology Delivery & Innovation. Learn more: https://craigarmour.com