What Is Ethical AI?

As Artificial Intelligence filters into ever more industries, a new conversation is emerging around its ethical use and the measures that must be taken to safeguard privacy and prevent bias. 

The last few years have seen a remarkable rise in Artificial Intelligence's (AI) ubiquity across the business landscape. AI's seemingly unparalleled scalability and breadth of applications have made it an increasingly popular asset, with the share of businesses utilizing AI solutions growing by 10% between 2018 and 2019 alone. Spending on AI is growing in turn: annual global business spending on AI reached $50 billion in 2020 and is projected to surpass $110 billion by 2024.

While it may seem like AI is an incredible new technology transforming both life and business as we know it, its proven benefits must be evaluated alongside the potential risks it poses. As more questions emerge about the relationship between AI and privacy, data security, and bias, the need to discuss and develop an ethical set of principles to guide our relationship with AI only grows. Read on to learn about the ongoing conversations around this tricky topic.


The Relationship Between Artificial Intelligence, Privacy, and Security

Any complete discussion of ethical AI has to consider a central question: Does AI help or hinder privacy and security efforts? There are compelling arguments on both sides of the debate.

The Good:

Those who say AI supports increased privacy and security point to AI-fueled anonymization tools that are designed to remove personal information from various media while keeping data models intact and in compliance with both government regulations and the expectations consumers hold for trusted businesses.

This is a compelling argument: there's no doubt that AI can be effective at anonymizing datasets. The process is straightforward. Using natural language processing (NLP), an AI-driven tool can automatically anonymize personal data in text such as emails, invoices, and customer orders by detecting words and numbers consistent with sensitive personal data. The tool can then ensure these details are labeled accordingly, geofenced to comply with country-specific regulations, or restricted to designated personnel for authorized purposes.
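As a toy illustration of this kind of pipeline, the sketch below uses simple regular expressions rather than a trained NLP model to detect and mask email addresses and phone numbers in free text. The patterns, labels, and sample order text are illustrative assumptions, not any vendor's actual implementation; a production system would rely on trained entity recognizers with far broader coverage.

```python
import re

# Illustrative patterns for two common kinds of personal data.
# A real NLP-based anonymizer would use trained entity recognizers
# to catch names, addresses, account numbers, and so on.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace detected personal data with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

order = "Ship to jane.doe@example.com, call 555-867-5309 on arrival."
print(anonymize(order))
# → Ship to [EMAIL], call [PHONE] on arrival.
```

Because the placeholders carry labels rather than blanks, downstream analytics can still count how many orders included a phone number, which is the sense in which the data model stays "intact" after anonymization.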

With use cases ranging from healthcare to product reviews and beyond, AI-backed data anonymization could make AI a powerful player in the data privacy and security space in the coming years. But that doesn't mean AI is risk-free.


The Bad:

On the other side of the debate are those who argue that AI presents serious risks to privacy and cybersecurity. They point both to the dangers of adversarial AI and to a lack of U.S. government oversight that allows AI use among private companies to run largely unchecked.

The fact is, the same capabilities that make AI advantageous for cybersecurity can also be exploited by bad actors through adversarial AI attacks. These attacks can take a couple of forms. One involves attackers deploying an AI defense before an actual attack and testing their malware against it, making modifications as needed until it evades detection. Alternatively, hackers can exploit machine learning models directly, causing them to misinterpret inputs and act in a way that favors the attacker. To achieve this, hackers craft "adversarial example" inputs that closely resemble normal inputs but are optimized to break the machine learning model's performance.
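The core idea behind adversarial examples can be shown numerically. The sketch below attacks a toy linear classifier, a stand-in for a real detection model: nudging each input feature slightly in the direction that works against the model's weights (the fast-gradient-sign intuition) flips the prediction even though the input barely changes. All weights, inputs, and the perturbation size here are made-up demonstration values.

```python
# Toy "malware detector": a linear classifier over numeric features.
def predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1  # 1 = "malicious", -1 = "benign"

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial_example(w, x, eps):
    """Shift each feature a small amount (eps) against the model's
    weights; for a linear model the weights ARE the gradient."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.9, -0.4, 0.7], -0.2
x = [0.5, 0.1, 0.3]                        # flagged as malicious
x_adv = adversarial_example(w, x, eps=0.3)  # small, targeted nudge
print(predict(w, b, x), predict(w, b, x_adv))
# → 1 -1  (the perturbed input now slips past the classifier)
```

Real attacks on deep networks use the same recipe with the gradient computed by backpropagation, which is why inputs that look identical to a human can be read completely differently by the model.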

As for AI regulation, the U.S. is lagging in administering ethical AI use. Whereas the European Union's GDPR governs how personal data may be collected and processed (including by AI systems), the U.S. lacks any uniform federal oversight. Instead, private companies are left to regulate themselves. This self-policing system leaves companies largely reliant on compliance rules and market forces, such as negative reactions from customers or shareholders, to govern their AI practices. Unfortunately, both of these can sometimes be at odds with a company's self-interest. While we might wish that private companies would reliably act ethically, that's not something we can count on, especially with something as nebulous as AI.


The Role of Bias in AI

Another core issue when it comes to AI and ethics is the ongoing presence of bias in AI programs. Though it’s nice to imagine that technological systems are free of the negative elements of humanity, AI systems remain a reflection of their human creators. That means they can be vulnerable to an assortment of human biases and errors, whether their creators are conscious of them or not. 

A great example of this bias comes from the facial recognition algorithms developed by Microsoft and IBM, which have displayed biases when detecting people's gender. In these cases, the AI models detected the gender of white men more accurately than the gender of darker-skinned men. Similar problems have arisen with voice recognition systems at companies like Apple, Amazon, and Google. And, just recently, Amazon discontinued its use of AI in its recruitment efforts because its algorithm proved to inherently favor male candidates over female ones.

As the potential use cases of AI continue to multiply, the problems posed by AI’s biases stand to cause even more damage — particularly to minority communities. Fortunately, a few large companies are taking steps to investigate and try to address this bias. 

Given that some of the problems of AI bias result from the presence of bias in the data used to train AI systems, one proposed solution is to have companies create data statements explaining exactly what data is being used in the development of their systems and why. The hope is that this transparency could help ensure sufficient effort is made to eliminate bias in the training data. Of course, even if this approach is adopted, there will still be much to do to ensure widespread bias-free, ethical AI use. 


Maintaining Privacy and Security in the Face of AI

As with so many developments within the digital world, AI carries both risks and benefits. While the ideal would be to have ethical AI that is free from biases and formally regulated to prevent privacy or security concerns, we are still a long way from that reality. Until then, it is up to us as individuals and business leaders to ensure that we are taking every possible measure to remain cybersecure in the face of any attack, whether it comes from adversarial AI or from a phishing email.

At Turn-key Technologies, Inc. (TTI), we have 30 years of experience helping organizations of all shapes and sizes stay cybersecure. We've seen many new technological developments in that time, and we've always found solutions to tackle the new vulnerabilities they bring. The same is true for AI. We can work with you to create a cybersecurity infrastructure that lets you make the most of your technology without having to worry about the potential risks it brings.

Contact us today to learn how we can help your business stay cybersecure.

By Tony Ridzyowski
