Here is an excerpt from an article about the future relationship between humans and technology. The last part, shown below, applies to business and management and raises some interesting questions.

Q: What sort of relational technologies might a manager turn to?

A: We've already developed machines that can assess a person's emotional state. So for example, a machine could measure a corporate vice president's galvanic skin response, temperature, and degree of pupil dilation precisely and noninvasively. And then it might say, "Mary, you are very tense this morning. It is not good for the organization for you to be doing X right now. Why don't you try Y?" This is the kind of thing that we are going to see in the business world because machines are so good at measuring certain kinds of emotional states. Many people try to hide their emotions from other people, but machines can't be easily fooled by human dissembling.
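
As a rough illustration of the kind of rule such a system might apply, here is a minimal sketch in Python. The sensor names, thresholds, and suggestion text are all hypothetical illustrations; nothing here corresponds to a real affect-sensing product or API.

```python
# Minimal sketch of a threshold-based "stress check" over physiological
# readings. All field names, thresholds, and the suggestion text are
# hypothetical illustrations, not a real affect-sensing API.

from dataclasses import dataclass

@dataclass
class Reading:
    gsr_microsiemens: float   # galvanic skin response (skin conductance)
    skin_temp_celsius: float  # peripheral skin temperature
    pupil_dilation_mm: float  # pupil diameter

def looks_tense(r: Reading) -> bool:
    """Crude rule of thumb: flag a reading as 'tense' when most signals
    sit outside an (illustrative) resting range."""
    signals = [
        r.gsr_microsiemens > 10.0,   # elevated skin conductance
        r.skin_temp_celsius < 31.0,  # peripheral cooling under stress
        r.pupil_dilation_mm > 5.0,   # dilated pupils
    ]
    return sum(signals) >= 2         # majority vote across the signals

if __name__ == "__main__":
    morning = Reading(gsr_microsiemens=12.4,
                      skin_temp_celsius=30.2,
                      pupil_dilation_mm=5.6)
    if looks_tense(morning):
        print("Mary, you seem very tense this morning. Why don't you try Y?")
```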

Q: So could machines take over specific managerial functions? For example, might it be better to be fired by a robot?

A: Well, we need to draw lines between different kinds of functions, and they won't be straight lines. We need to know what business functions can be better served by a machine. There are aspects of training that machines excel at (providing information, for example), but there are aspects of mentoring that are about encouragement and creating a relationship, so you might want to have another person in that role. Again, we learn about ourselves by thinking about where machines seem to fit and where they don't. Most people would not want a machine to notify them of a death; there is a universal sense that such a moment is a sacred space that needs to be shared with another person who understands its meaning. Similarly, some people would argue that having a machine fire someone would show lack of respect. But others would argue that it might let the worker who is being fired save face.

Related to that, it's interesting to remember that in the mid-1960s computer scientist Joseph Weizenbaum wrote the ELIZA program, which was "taught" to speak English and "make conversation" by playing the role of a therapist. The computer's technique was mainly to mirror what its clients said to it… ELIZA was not a sophisticated program, but people's experiences with it foreshadowed something important. Although computer programs today are no more able to understand or empathize with human problems than they were forty years ago, attitudes toward talking things over with a machine have gotten more and more positive. The idea of the nonjudgmental computer, a confidential "ear" and information resource, seems increasingly appealing… When I've found sympathy for the idea of computer judges, it is usually because people fear that human judges are biased along lines of gender, race, or class. Clearly, it will be a while before people say they prefer to be given job counseling or to be fired by a robot, but it's not a hard stretch of the imagination.
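
For readers curious how little machinery that mirroring technique actually requires, here is a toy sketch of the idea in Python: reflect the speaker's pronouns and hand the statement back as a question. This is a deliberately stripped-down illustration, not a reconstruction of Weizenbaum's program, and the reflection table is a tiny hypothetical subset.

```python
# Toy illustration of ELIZA-style "mirroring": swap the speaker's pronouns
# and wrap the statement in a therapist-like prompt. A stripped-down sketch
# of the idea, not Weizenbaum's actual program.

import re

# Pronoun reflections applied word by word (illustrative subset).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Swap first- and second-person words, leaving everything else alone."""
    words = re.findall(r"[a-zA-Z']+", statement.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    return f"Why do you say that {reflect(statement)}?"

if __name__ == "__main__":
    print(respond("I am unhappy with my manager"))
    # -> Why do you say that you are unhappy with your manager?
```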

I think emotional attachment to cognitive machines will follow the same pattern as most other paradigm-shifting technologies: first, people will hate them and try to ban them; then the next generation will be comfortable using them; and finally, once a generation has grown up with them, they will accept cognitive machines as a normal part of life.

(If it happens, I think the first thing we should put the robots in charge of is accounting. Removing the human element from it is the only way to prevent another Enron or WorldCom disaster.)