Responsible Adoption of AI in a Cloud Environment

Authors
Ashok Panda; Adriano Koshiyama; Nigel Kingsman; Emre Kazim; Airlie Hilliard; Rosaline Polle; Umar Mohammed; Giulio Filippi; Markus Trengove.

As business reliance on algorithmic systems becomes ubiquitous, algorithms will make billions of decisions with minimal or no human intervention, including decisions with important financial, legal, and political implications. Estimates suggest that AI will contribute approximately $16 trillion to global GDP by 2030. We are also seeing the rise of cloud-based AI, the convergence of two technologies that have each seen widespread growth and adoption over the past decade.

There are several benefits to implementing AI in the cloud, spanning cost, operations, robustness, and privacy. For instance, pushing ML operations to the cloud eliminates much of the on-premises operations overhead, reducing the burden of machine-learning package installations, hardware and software conflicts, and ML-specific vulnerability updates. Cloud-based AI employs robust model backup protocols by design, ensuring business continuity in the event of failure and protecting against high model re-training costs. Moving ML operations to the cloud also gives users the benefit of best-in-class enterprise data protection and privacy infrastructure.

Risks Associated with Cloud-based AI
Despite the transformative potential of algorithmic systems, the reach of their effects, combined with the paucity of supervision, can bring reputational, financial, and ethical risks. This is especially true when such systems are unfair, opaque, inaccurate, or handle private data inappropriately. Volkswagen's ‘Dieselgate’ scandal, which resulted in fines of $34.69B, and Knight Capital's collapse, with ramifications exceeding $400M, are two high-profile examples of the potential costs of deploying unsafe algorithmic systems.

These risks are exacerbated along some dimensions in cloud computing. For instance, cloud-based AI offerings suffer from increased latency at model inference time compared to on-premises implementations. Cloud-based AI also necessitates data and information transfer between the user and the cloud provider, which creates a new point of data-protection vulnerability that a fully internal, on-premises implementation does not have.

Users of cloud-based AI often lack direct access to the model, the data, and time-stamped snapshots of both. This can make it difficult to provide acceptable post-hoc explanations of model predictions, and it compounds regulatory risk when such explanations are requested by a regulator. In addition, ML models can generate sensitive personal data, placing the system within a regulatory scope that exceeds the AI cloud provider's standard compliance coverage and leaving the residual regulatory risk with the user.
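One practical control on the user's side, sketched below under the assumption of simple file-based artifacts (not any specific provider's API), is to keep time-stamped, hashed snapshots of the model and data that were deployed, so that post-hoc explanations can later be reproduced against the exact artifacts in question:

```python
# Minimal sketch: a tamper-evident, time-stamped snapshot of a model artifact
# and its training data for post-hoc audit. File paths and the snapshot
# format are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Hash a file in chunks so large artifacts do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot(model_path: str, data_path: str, out_path: str) -> dict:
    """Record what was deployed and when, so explanations can be reproduced
    against the exact model/data pair a regulator asks about."""
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_file": model_path,
        "model_sha256": sha256_of_file(Path(model_path)),
        "data_file": data_path,
        "data_sha256": sha256_of_file(Path(data_path)),
    }
    Path(out_path).write_text(json.dumps(record, indent=2))
    return record

# Example usage (hypothetical file names):
# snapshot("model.pkl", "train.csv", "audit_snapshot.json")
```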

The Need for Responsible AI
With the exponential growth in the use of algorithms, there is an acute need to ensure that such systems are governed appropriately. To facilitate the development of regulated and safe algorithmic systems and achieve Responsible AI, we can envision a new field: algorithmic auditing and assurance. This field will operationalize and professionalize current theoretical research in Responsible AI, AI Ethics, and Data Ethics. The purpose of AI auditing and assurance is to provide standards, codes of practice, and regulations that assure users of the safety and legality of their algorithmic systems.

Broadly, algorithmic auditing can provide a way to measure the robustness, transparency, fairness, and privacy of an algorithmic system against predetermined standards. It can help perform ex-ante assessments of the levels and types of risks in particular algorithmic systems and provide recommendations for risk mitigation and prevention strategies.
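As a minimal illustration of measuring against predetermined standards, the following sketch compares audit metrics to acceptance thresholds; the metric names, measured values, and limits are all hypothetical, since acceptable levels depend on the use case and jurisdiction:

```python
# Hypothetical audit standards; real thresholds are a policy decision made
# per use case, jurisdiction, and organizational risk appetite.
audit_standards = {
    "demographic_parity_gap": 0.10,     # max acceptable selection-rate gap
    "accuracy_drop_under_noise": 0.05,  # max acceptable robustness degradation
}

# Hypothetical measured values from an audit run.
measured = {
    "demographic_parity_gap": 0.14,
    "accuracy_drop_under_noise": 0.03,
}

for metric, limit in audit_standards.items():
    status = "PASS" if measured[metric] <= limit else "FAIL"
    print(f"{metric}: measured={measured[metric]:.2f}, "
          f"limit={limit:.2f} -> {status}")
```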

Implementing Algorithmic Auditing
At each stage of model development, there are four key risk levers (privacy, fairness, explainability, and robustness) that involve trade-offs and interactions. For instance, accuracy, a component of robustness, may need to be traded away to lower a measured outcome bias, as the sketch below illustrates. Similarly, making a model more explainable may affect the system's performance and privacy, and improving privacy can affect the ability to assess the system's impact. How these features and trade-offs are optimized will depend on multiple factors, including the use case, the regulatory jurisdiction, and the risk appetite and values of the organization implementing the algorithm.
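The accuracy/fairness trade-off can be made concrete with a small simulation. The scores, labels, and group assignments below are synthetic assumptions, not real audit data; the point is only that a group-specific decision threshold shrinks a demographic-parity gap at some cost to overall accuracy:

```python
# Minimal sketch of the accuracy/fairness trade-off, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B
# Group B's scores are shifted down, mimicking a biased scoring model.
score = rng.normal(loc=np.where(group == 0, 0.6, 0.4), scale=0.2, size=n)
label = (rng.random(n) < score.clip(0, 1)).astype(int)

def evaluate(threshold_a: float, threshold_b: float):
    pred = score >= np.where(group == 0, threshold_a, threshold_b)
    accuracy = (pred == label).mean()
    # Demographic parity gap: difference in positive-outcome rates.
    dp_gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, dp_gap

print("single threshold:", evaluate(0.5, 0.5))    # higher accuracy, larger gap
print("group thresholds:", evaluate(0.5, 0.35))   # smaller gap, slightly lower accuracy
```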

For instance, to ensure fairness, it is important to diagnose and mitigate bias in decision-making. There can be multiple sources of bias in AI and ML, such as tainted training examples or sample-size disparity. Moreover, fairness can be interpreted very differently in different environments and different countries, so a single deployment of a given algorithm may be held to several different fairness measures at once, and those measures can disagree, as the toy example below shows.
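The following toy example, with assumed label and prediction counts, shows two common fairness definitions disagreeing on the same predictions: selection rates are equal across groups (demographic parity holds) while true-positive rates are not (equal opportunity fails):

```python
# Toy illustration: the same predictions pass one fairness definition and
# fail another. Counts are assumptions chosen to make the contrast visible.
import numpy as np

# Group A: 60 positive labels; the model flags 50 of them and no negatives.
y_a = np.array([1] * 60 + [0] * 40)
p_a = np.array([1] * 50 + [0] * 50)
# Group B: 30 positive labels; the model flags 20 positives plus 30 negatives.
y_b = np.array([1] * 30 + [0] * 70)
p_b = np.array([1] * 20 + [0] * 10 + [1] * 30 + [0] * 40)

def rates(y, p):
    selection_rate = p.mean()   # demographic parity compares this across groups
    tpr = p[y == 1].mean()      # equal opportunity compares this across groups
    return selection_rate, tpr

print("group A:", rates(y_a, p_a))  # (0.50, ~0.83)
print("group B:", rates(y_b, p_b))  # (0.50, ~0.67): parity holds, TPRs diverge
```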

Indeed, it is not mathematically possible to construct an algorithm that simultaneously satisfies all reasonable definitions of a "fair" or "unbiased" algorithm. Whichever measure is used, however, algorithmic bias can be mitigated at different points in a modelling pipeline: pre-processing, in-processing, and post-processing. A sketch of one pre-processing approach follows.
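As one concrete pre-processing example, the sketch below implements the reweighing idea in the spirit of Kamiran and Calders: each training instance is weighted by the ratio of the expected to the observed frequency of its (group, label) cell, so that no combination is over-represented during training. The exact formulation here is a simplified assumption:

```python
# Minimal sketch of pre-processing bias mitigation via reweighing.
import numpy as np

def reweigh(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Weight each sample by P(group) * P(label) / P(group, label)."""
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                expected = (group == g).mean() * (label == y).mean()
                weights[mask] = expected / mask.mean()
    return weights

# The weights can then be passed to most learners that accept sample weights,
# e.g. (hypothetical model object):
# model.fit(X, label, sample_weight=reweigh(group, label))
```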

Another key parameter is robustness, which characterizes the extent to which an algorithm is safe and secure: not vulnerable to tampering or to compromise of the data on which it is trained. Here, technical strategies can help the analyst measure expected generalization performance, detect concept drift, defend against adversarial attacks, and follow best practices in systems development and algorithm deployment; a minimal drift-detection sketch is given below.
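For instance, input-distribution drift, a common proxy monitored when watching for concept drift, can be flagged with a two-sample Kolmogorov-Smirnov test. The synthetic data and the alert threshold below are illustrative assumptions:

```python
# Minimal sketch: flagging distribution drift in one feature by comparing
# production data against the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
live_feature = rng.normal(loc=0.3, scale=1.0, size=1_000)   # production data, shifted

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # the alert threshold is a policy choice, not a universal rule
    print(f"Drift suspected: KS statistic={stat:.3f}, p={p_value:.4f}")
```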

Overall, despite its numerous benefits, the use of algorithmic systems, particularly in the context of cloud computing, presents financial, reputational, and ethical risks. Putting algorithmic auditing in place can provide effective assurance of a system's robustness, transparency, fairness, and privacy. In the future, we can envision the emergence of a new industry of algorithmic auditing and assurance at the centre of an ecosystem of trust in AI.