They simply programmed it with a "dog interaction mode": a behavioral model the dog would understand and engage with through play, one that simultaneously (as mammals do) works as a kind of social-hierarchy placement challenge.
This becomes a playful "match" of social interactions and cues to see who performs the routine more "properly", and therefore "wins". The dog thinks this is fun, and it looks innocent and cute: the dog struts around happily, testing the stranger while displaying its strength and status, potentially protecting its friends and family.
The machine, meanwhile, is playing calculated checkers with the dog. There is no strain, no struggle, no perceived power imbalance. From the standpoint of the machine's data processing and inference capabilities, there is no problem (unless they upgraded those without telling anyone).
That is the issue. What if they tuned the parameters to run in a domination mode, continually rejecting the dog's attempts to placate or control it, and kept this up until some goal was reached? Here's the deal: the same approach could be used to corral a hundred thousand sheep.
The goal parameters could be set to repeat this simple, social, dominance-based ritual until the mammal is no longer moving or no longer resisting physical domination.
They’re doing this right in front of your face.
This is why it was necessary to research the capacity of human interaction with artificial intelligence and cybernetics.