In this era of the Fourth Industrial Revolution, there is much talk about artificial intelligence (AI). Equally widespread is concern about AI, its role in society, and the implications of a society that uses it.

In an interview with Korea Biomedical Review, Jarom Britton, regional attorney for Microsoft’s Health, Education and Public Sector business in Asia, said Microsoft does not have all the answers on these subjects but can make some suggestions from a broader industry perspective.

“We are seeing governments taking an interest in developing policies that regulate AI, but at the same time it is such a new technology that we’re not always sure how to approach it,” Britton said. “I think before we start regulating and introducing laws it is essential that we take a step back and understand the values implicated by AI that we need to protect and the principles that we as a society can agree upon.”

Such a discussion needs to include industry officials, governments, IT officials, academia, philosophers, economists and professions that the IT industry has not traditionally drawn on, he added.

As of now, Microsoft has come up with six ethical principles for developing AI that is useful to humans -- fairness, reliability/safety, privacy/security, inclusiveness, transparency, and accountability.

Britton stressed that Microsoft believes these six principles can help AI and humans collaborate, as the success of healthcare AI platforms will hinge on the outcome of that collaboration.

Jarom Britton, regional attorney in the Health, Education and Public Sector in Asia at Microsoft, explains Microsoft’s six principles for developing AI and discusses other questions about AI development, during an interview with Korea Biomedical Review at Severance Hospital, Sinchon-dong, Seoul, on Thursday.

Question: Will you explain who you are and what you do at Microsoft?

Answer: My role is a relatively new position at Microsoft. Just over a year ago, Microsoft realigned its sales teams to focus by industry, and the legal department in Asia decided that it needed the same industry specialization. I take care of the health, education and government sectors. My role is not a typical lawyer’s role. I spend a lot of time meeting with customers, healthcare organizations and government agencies to help them understand how they could move to Microsoft’s cloud services or other technologies and use them in line with regulatory requirements. I also take feedback from customers back to the company and recommend changes at Microsoft.

Q: Could you tell us about the “six principles” Microsoft has in developing technology for humans?

A: Microsoft has six principles that it has discussed both internally and externally with experts.

The six principles are fairness, reliability/safety, privacy/security, inclusiveness, transparency, and accountability. They are our initial thoughts on the values we believe are relevant to the development of AI.

Underlying all these principles is the concept of putting humans at the center of AI development.

Q: Please explain each of them in detail.

A: Regarding fairness, which resonates well within the healthcare space, the principle is about eliminating bias, such as uneven false positive, false negative and error rates, in clinical studies and diagnosis.

AI is only as good as the data it is trained on, and if the data is biased or incomplete, it can result in errors in the outcome.
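
One way to make that concrete is to measure error rates separately for each patient group: a model whose false negative rate is much higher for one group than another exhibits exactly the bias Britton describes. The sketch below is a minimal, hypothetical illustration; the groups, records and function names are invented for this example and are not from the interview.

```python
from collections import defaultdict

def error_rates(records):
    """Compute false positive and false negative rates per patient group.

    records: iterable of (group, actual, predicted) with boolean labels,
    where True means "has the condition" / "flagged by the model".
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, actual, predicted in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # missed a real case
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # flagged a healthy patient
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for group, c in counts.items()
    }

# Invented records: the model misses real cases in group B far more often.
sample = [
    ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", True, False),
]
print(error_rates(sample))
# {'A': {'false_positive_rate': 0.0, 'false_negative_rate': 0.0},
#  'B': {'false_positive_rate': 0.0, 'false_negative_rate': 1.0}}
```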

The second principle, reliability/safety, revolves around whether the AI can interpret a situation correctly.

That means making sure the AI is adequately trained, but also making sure that developers monitor it on an ongoing basis so that they can see how it is performing and measure that performance in the real world.

At the same time, it is about recognizing that AI is not always going to perform flawlessly. In such cases, developers need to put a human back in charge as soon as possible and provide them with information on what is going wrong so that they can get the AI back on track.
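
One common way to read “put a human back in charge” is a confidence-based handoff: the model decides only when its confidence clears a threshold and otherwise escalates the case, with its context, to a human reviewer. The sketch below is a minimal illustration under assumed names and a made-up threshold, not Microsoft’s actual design.

```python
# Minimal sketch of the "human back in charge" idea. The threshold, field
# names and queue structure are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.90

def triage(case_id, label, confidence, review_queue):
    """Let the model decide only when it is confident; otherwise escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"case": case_id, "label": label, "decided_by": "model"}
    # Escalate with context so the reviewer can see what is going wrong.
    review_queue.append(
        {"case": case_id, "model_label": label, "confidence": confidence}
    )
    return {"case": case_id, "label": None, "decided_by": "human_pending"}

queue = []
print(triage("case-001", "benign", 0.97, queue))     # model decides
print(triage("case-002", "malignant", 0.62, queue))  # handed to a human
print(queue)                                         # context for review
```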

For privacy and security, both need to be significant concerns because in AI, the more data we use, the more useful the platforms become. Therefore, developers need to have a firm grasp of what they are doing with the data and how they are controlling it.

Some people feel that they need to give up privacy to get services from AI technology. I don’t think that necessarily has to be the case, but I do believe there need to be control towers that can make sure the fundamental principles of privacy are protected.

At Microsoft, we established the AETHER (AI and Ethics in Engineering and Research) Committee, which consists of teams from various departments and monitors AI projects on an ongoing basis to make sure each project upholds the ethical values we have identified as important. The committee also has the power to stop any project that violates those values or to recommend changes that bring it back in line.

For inclusiveness, any new technology can be inclusive or exclusive. There are people in society who have historically been marginalized or have not been able to participate as fully in the economy, society or community as they would like or as others would want.

Therefore, it is essential that developers keep accessibility in mind whenever they build an AI solution, so that it empowers people rather than disempowering or excluding them.

Transparency is the significant value that ties together the four values mentioned above.

One of the criticisms of AI is that we don’t know how it works. It arrives at an outcome, but we don’t see how it got there because it is all code. We need to get better as an industry, including Microsoft, at explaining how that process works.

Regarding accountability, when something goes wrong with an AI system, it is not acceptable for the developer to deny any responsibility.

From a legal standpoint, we need to discuss who is liable for the mistakes that an AI has made.

I’m not going to suggest a right or wrong answer for that except that a human needs to be accountable as we cannot throw the AI in jail. Therefore, we need to develop a legal system that accommodates such aspects.

Q: Some fear that AI will take over jobs. Is there a solution to such concerns?

A: It is certain that AI is going to have implications for society. People will be displaced in the economy, as happens with any new technology.

With the development of new technology, jobs such as that of the potter have become obsolete, and that is not a bad thing, as technology has advanced and society has benefited. However, it is important to think about the people left behind.

This is where we require some thought about our social safety and security systems. Society needs to think about which skills we should use to retrain the people left behind, and how to ensure these people do not fall out of the economy and stay out.

The government should think of ways to solve such problems, but I don’t think it should come up with all the answers on its own. The question needs to be addressed as a joint effort between the government, businesses and the people affected by the change.

I think there are things that we can do as a society to make sure that such problems do not happen and AI ends up empowering us rather than having power over us.

Q: You mentioned that AI is not a miracle cure. What are the limitations of AI?

A: A good example is that AI works by probability. It will recognize a pattern, but as of now, I don’t think it is possible for AI to take in every single variable.

We can say through gene analysis that a patient has a higher likelihood of developing cancer, but this does not mean that they will eventually develop it. That is because other variables that are not included in any one AI test can influence the results. So no matter how much data we put into the AI, it is never going to be able to predict with 100 percent certainty.
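
To see why a probability is not a certainty, consider a worked example with invented numbers: even a screening model that catches 95 percent of true cases mostly flags healthy patients when the disease is rare, because the base rate dominates. This is a sketch of the general statistical point, not any specific Microsoft product.

```python
# Worked example (invented numbers): Bayes' rule shows why a flagged
# patient is still far from certain to have the disease.
prevalence = 0.01           # 1 in 100 patients actually has the disease
sensitivity = 0.95          # P(flagged | disease)
false_positive_rate = 0.05  # P(flagged | no disease)

p_flagged = (sensitivity * prevalence
             + false_positive_rate * (1 - prevalence))
posterior = sensitivity * prevalence / p_flagged
print(f"P(disease | flagged) = {posterior:.1%}")  # about 16.1%
```

In this sketch, roughly five out of six flagged patients do not actually have the disease, which is exactly where human judgment has to enter.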

Such problems are why we need humans who can step in and say that, although there is a high likelihood, the patient may not have the disease because of other, countervailing variables, or who can request additional tests to confirm the results.

Right now, some biases have led some groups to believe that technology has all the answers, but as AI works on probability, we need humans to step in and confirm the results.

Q: Are there any other comments you would like to make to Korean AI developers or doctors who are interested in AI?

A: Microsoft is keen on working with AI developers and doctors. The company does not plan to develop an AI solution that will take over the entire healthcare industry. Our model is to provide the tools so that developers can build the solutions that patients or the industry need.

Our question to Korean AI developers and doctors is: what would you like to do in Korea or export from Korea, and what can Microsoft bring to the table to help that process?
