Ethical Considerations in AI and Cloud Computing: Ensuring Responsible Development and Use

By Ed Watal, Founder & Principal — Intellibus

The history of ethics in computing is almost as old as computing itself. In the 1940s, when the first computers appeared, the chief ethical concern was the displacement of workers. Since then, ongoing discussions of ethics have focused on privacy, security, and fair access.

As artificial intelligence has taken center stage in the computing world, the question of ethics has come to the forefront once again. The widespread use of cloud computing as a data storage and management solution adds additional complexity to the discussion.

Pursuing a design that satisfies ethical concerns is critical to the ongoing development of AI. The following are key issues that developers and users must consider and address.

Data Privacy

Since 2015, the share of corporate data stored in the cloud has doubled, reaching 60 percent. That data includes information on employees, customers, corporate finances, and intellectual property. Some, if not most, of it would be considered sensitive.

While storing that type of data is not new, storing it in the cloud raises new concerns about how it is safeguarded. Privacy and security are two of the top ethical considerations related to cloud computing.

A recent survey reveals that 80 percent of companies experienced at least one cloud-related security incident during the past year. Bad actors are coming for the data in the cloud, and companies that leverage the cloud for the efficiencies it offers must take steps to repel them. Ethical practice requires that companies investigate their cloud providers and confirm they have deployed robust privacy protections, encryption, and access controls.
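As a concrete illustration, the snippet below is a minimal sketch, assuming an AWS S3 bucket and the boto3 library, of how a company might programmatically verify that default encryption and public-access blocking are enabled on a storage bucket. The bucket name is hypothetical, and a real review would also cover access policies, key management, and network controls.

    # Minimal sketch: verify baseline safeguards on an S3 bucket.
    # Assumes AWS credentials are configured and boto3 is installed.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "example-corporate-data"  # hypothetical bucket name

    # Check that server-side encryption is enabled by default.
    try:
        enc = s3.get_bucket_encryption(Bucket=bucket)
        print("Default encryption:", enc["ServerSideEncryptionConfiguration"]["Rules"])
    except ClientError:
        print("WARNING: no default encryption configured")

    # Check that all forms of public access are blocked.
    try:
        pab = s3.get_public_access_block(Bucket=bucket)
        if not all(pab["PublicAccessBlockConfiguration"].values()):
            print("WARNING: public access is not fully blocked")
    except ClientError:
        print("WARNING: no public access block configured")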

The growth of AI adds to the privacy concerns related to data in the cloud. To function effectively, AI systems require massive datasets for training, and cloud platforms provide an efficient place to store them.

AI also adds ethical concerns related to how data is collected and used. Personal data collected for one purpose can be repurposed for another. Recent reports reveal that developers are training AI models on a wide range of data originally gathered for other purposes.

To what degree consent should be obtained before data is used for AI training is an ethical question that must be addressed. Violations of copyrights and terms of service are further ethical issues that come into play when companies repurpose data for AI training.
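One practical way to operationalize consent is to record the purpose each data subject agreed to and filter training data accordingly. The sketch below is illustrative only, with hypothetical record fields; it is not a substitute for a legal consent framework.

    # Illustrative sketch: exclude records whose owners did not consent
    # to a "model_training" purpose. Field names are hypothetical.
    records = [
        {"user_id": 1, "text": "...", "consented_purposes": {"analytics"}},
        {"user_id": 2, "text": "...", "consented_purposes": {"analytics", "model_training"}},
    ]

    def eligible_for_training(record):
        """Return True only if the subject explicitly consented to training use."""
        return "model_training" in record.get("consented_purposes", set())

    training_set = [r for r in records if eligible_for_training(r)]
    print(f"{len(training_set)} of {len(records)} records eligible for training")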

Fairness and Bias

Ethical concerns about the potential for bias in AI have haunted it since its inception. The main fear is that AI systems trained on biased data will perpetuate or amplify bias in their output.

AI systems deployed to assist with hiring provide an example. Critics fear that the data these systems are trained on could lead them to unfairly discriminate against certain demographics based on attributes such as age, gender, or race. In August 2023, the US Equal Employment Opportunity Commission settled a hiring discrimination case against a company whose AI-powered hiring software was found to have an age bias.

Ethical practice requires that developers prevent historical biases and gaps in representation from unfairly skewing the output of AI systems. Developers must also protect against algorithmic bias, which can emerge even when training data is carefully curated. Ongoing algorithmic audits are essential for safeguarding against bias and unfairness.
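To make the idea of an algorithmic audit concrete, the sketch below computes one common fairness measure, the disparate impact ratio (each group's selection rate divided by the most-favored group's rate), over a hypothetical sample of hiring decisions. A ratio below the commonly cited four-fifths (0.8) threshold flags a group for review; real audits would examine many metrics and the underlying data.

    # Minimal fairness-audit sketch: disparate impact across groups.
    # The sample data and 0.8 threshold are illustrative; real audits go deeper.
    from collections import defaultdict

    decisions = [  # (group, hired) pairs from a hypothetical audit sample
        ("under_40", True), ("under_40", True), ("under_40", False),
        ("over_40", True), ("over_40", False), ("over_40", False),
    ]

    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += hired

    rates = {g: hires[g] / totals[g] for g in totals}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "  <-- review" if ratio < 0.8 else ""
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")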

Transparency

Transparency and ethics go hand in hand. With AI, transparency is an ethical practice that underpins meaningful consent, accountability, and algorithmic auditing. It is also essential for building public acceptance of and trust in AI.

AI has been accused of having a “black box” problem, referring to the lack of transparency in how it operates and the logic behind its decisions. The use of complex algorithms and proprietary systems contributes to the problem. Ethical practices must address the black box issue by ensuring a high level of transparency in AI development and deployment.
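Tools for probing opaque models do exist. As one hedged example, the sketch below uses scikit-learn's permutation_importance to estimate how much each input feature drives a trained classifier's predictions; the synthetic data and toy model are illustrative only.

    # Sketch: probing a "black box" classifier with permutation importance.
    # The synthetic data and model stand in for a real production system.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature and measure how much accuracy drops:
    # large drops mean the model leans heavily on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {importance:.3f}")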

If AI is to be trusted to guide critical decisions in areas such as medical diagnosis and criminal justice, transparency must be incorporated into its ongoing development. Failure to do so could lead to consumer distrust, security risks, and regulatory blocks.

Accountability

Assigning responsibility for the outcomes of AI-driven systems is perhaps the most important ethical consideration of all. If an AI-powered system guiding medical diagnosis makes a decision that leads to a failed treatment, who should take responsibility? Is the AI developer, the technology firm that deployed the AI, or the doctor ultimately accountable for the bad information?

Determining accountability in AI use is both a practical and an ethical consideration. Consider the development of AI for self-driving cars. If there is no driver at the wheel, with whom does liability for a collision lie? Does responsibility rest with the automaker, the AI developer, or the consumer or organization that owns the car?

To ensure the ethical development and use of AI, all parties involved must take on a degree of accountability. Developers must implement transparency, enable auditing, and be quick to admit when problems arise. Companies deploying AI in their operations must understand the potential for abuse, adopt strong use policies, and communicate with transparency about AI’s capabilities, limitations, and functions.
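On the developer side, one concrete enabler of accountability and auditing is a decision trail that records what the system was asked, what it answered, and which model version answered, so that failures can later be traced to a responsible party. Below is a minimal logging sketch; all field and model names are hypothetical.

    # Minimal audit-trail sketch: one JSON line per AI decision,
    # so outcomes can later be traced to a specific model version.
    # Field names and the model name are hypothetical.
    import json, hashlib
    from datetime import datetime, timezone

    def log_decision(model_version, prompt, output, path="ai_audit.log"):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            # Hash the prompt so the log is traceable without storing raw PII.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output": output,
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    log_decision("diagnosis-model-1.3", "patient symptoms ...", "suggested test: X")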

AI end users can also play a role in promoting accountability. By verifying the information AI provides, providing feedback to AI developers, and sharing concerns in public forums, users can assist in promoting ethical standards for AI development and use.

AI and cloud computing bring organizations in virtually every industry new opportunities to improve efficiency, productivity, and scalability. They also bring new challenges, especially in the area of ethics. Organizations involved in both the development and use of these technologies must pay close attention to the concerns explored above to avoid the reputational and legal risks that result from unethical practices.

Ed Watal is an AI Thought Leader and Technology Investor. One of his key projects includes BigParser (an Ethical AI Platform and Data Commons for the World). He is also the founder of Intellibus, an INC 5000 “Top 100 Fastest Growing Software Firm” in the USA, and the lead faculty of AI Masterclass — a joint operation between NYU SPS and Intellibus. Forbes Books is collaborating with Ed on a seminal book on our AI Future. Board Members and C-level executives at the World’s Largest Financial Institutions rely on him for strategic transformational advice. Ed has been featured on Fox News, QR Calgary Radio and Medical Device News.