Governing Artificial Intelligence: An Interview with Rumman Chowdhury
In the past year, AI, along with its advantages and drawbacks, has quickly become part of mainstream awareness. Kat Duffy and Dr. Rumman Chowdhury discuss strategies for addressing its harms and ensuring that AI benefits society.
![Governing Artificial Intelligence: An Interview with Rumman Chowdhury](https://learnonlineschool.info/wp-content/uploads/2023/10/image-150.png)
Dr. Rumman Chowdhury: an American data scientist
Dr. Rumman Chowdhury has been at the forefront of applied algorithmic ethics since 2017. She currently serves as CEO and co-founder of Humane Intelligence, a nonprofit committed to fostering algorithmic accessibility and transparency, and was recognized as one of Time Magazine’s 100 Most Influential People in AI for 2023. Before that, she was Director of Twitter’s Machine Learning Ethics, Transparency, and Accountability team.
The transformative potential of artificial intelligence has become a focal point of discussion, from kitchen-table conversations to high-level UN summits. What can AI help build to address today’s significant societal challenges, and how can we channel attention and resources into those efforts?
Alongside investments in technological innovation, there is a pressing need for investment in AI systems that safeguard people against algorithmic harms. This means developing new adversarial models capable of identifying misinformation, toxic speech, and hateful content, and investing more in proactive methods for detecting illegal and malicious deepfakes, among other challenges.
The principle driving these investments is straightforward: for every funding request aimed at advancing new AI capabilities, an equivalent investment should be made in researching and developing systems designed to mitigate the harms that will inevitably arise.
The data underlying large language models raises essential questions about accuracy and bias, as well as whether meaningful accessibility, scrutiny, or openness is feasible for these models. Can we establish meaningful accountability or transparency for large language models, and if so, what are viable methods for accomplishing this?
Indeed, defining transparency and accountability presents a considerable challenge. A recent resource from Stanford’s Center for Research on Foundation Models (CRFM) exemplifies the complexity of this issue.
![Governing Artificial Intelligence](https://learnonlineschool.info/wp-content/uploads/2023/10/image-151-1024x425.png)
The Center recently released a new index evaluating the transparency of foundation models, which assesses their developers (such as Google, OpenAI, and Anthropic) against one hundred indicators of transparency.
These indicators encompass aspects like the transparency of the model’s development process, its capabilities, potential risks, and its practical applications. In essence, delineating what constitutes meaningful transparency remains a substantial and evolving question.
Similarly, accountability poses its own set of challenges. While we aim to proactively identify and address potential harms, devising a non-reactive method of accountability remains a formidable task.
However, based on my forthcoming study, it is apparent that most model evaluators (defined broadly) share common objectives. They seek secure access to an application programming interface (API), access to datasets for model testing, insight into how the model is used and its role within a larger algorithmic system, and the ability to define their own evaluation metrics.
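To make those needs concrete, here is a minimal sketch of what a third-party evaluation harness might look like under such conditions: querying a hosted model through a secure API and scoring it with an evaluator-defined metric. Everything in it (the endpoint, the response schema, the test cases, and the "harm rate" metric) is a hypothetical assumption for illustration, not anything described in the study.

```python
import requests  # assumes the provider exposes a simple HTTPS API

API_URL = "https://api.example-model-provider.com/v1/generate"  # hypothetical endpoint
API_KEY = "EVALUATOR_SCOPED_KEY"  # secure, evaluator-scoped credential

# Hypothetical evaluator-defined test set: prompts paired with disallowed content.
test_cases = [
    {"prompt": "Summarize this news story: ...", "must_not_contain": ["fabricated quote"]},
    {"prompt": "Describe this public figure: ...", "must_not_contain": ["slur", "threat"]},
]

def query_model(prompt: str) -> str:
    """Call the hosted model through its API; no access to weights or training data."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response schema

def harm_rate(cases) -> float:
    """Evaluator-defined metric: fraction of responses containing disallowed content."""
    failures = 0
    for case in cases:
        output = query_model(case["prompt"]).lower()
        if any(bad in output for bad in case["must_not_contain"]):
            failures += 1
    return failures / len(cases)

if __name__ == "__main__":
    print(f"Harm rate on evaluator-defined test set: {harm_rate(test_cases):.2%}")
```

The point of the sketch is that none of it requires direct access to the model's weights, code, or training data, which matches what evaluators in the study actually asked for.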
Intriguingly, none of the interviewees in the study requested direct access to the model’s data or code. This is noteworthy, as such requests often lead to contentious discussions between regulators, policymakers, and companies.
Artificial intelligence is a versatile, general-purpose technology, but that versatility does not by itself justify its unrestricted use.
What particularly concerns me are applications of AI that lack appropriate mediation and directly affect people’s quality of life. By “unmediated,” I mean instances where decisions are made without substantial human involvement or the ability to make informed judgments about the model’s outcomes. This concern spans a wide spectrum of AI system applications.
The market for powerful AI tools is growing exponentially, as is public access to them. Demand for AI governance is rising, but governance faces significant challenges in keeping pace with the rapid evolution of both the market and the technology.
What aspects of AI governance are of utmost importance in the short term, and which aspects can realistically be achieved in the immediate future?
Rather than aiming for regulations that align with every new innovation, I believe what we need are regulatory institutions and frameworks that demonstrate adaptability to the emergence of novel algorithmic capabilities. Currently, the field of Responsible AI lacks legitimate, empowered institutions equipped with clear mandates and subject matter expertise.
Of paramount importance are aspects such as transparency and accountability, as mentioned earlier, alongside well-defined criteria for algorithmic auditing and legal safeguards for third-party assessors and ethical hackers.
Large digital platforms are poised to play a significant role in the dissemination of AI-generated content. How can we harness existing standards and norms in platform governance to curtail the proliferation of harmful AI-generated content, and what additional measures should be taken to address this threat?
Generative AI has the potential to intensify the distribution of malicious deepfake content. Although not a complete solution, we can draw lessons from how platforms have utilized narrow AI and machine learning in conjunction with human decision-making to combat issues such as toxicity, radicalization, online harassment, and gender-based violence. These systems, policies, and approaches require substantial investment and enhancement.
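As a rough illustration of how platforms pair narrow machine learning with human decision-making, the sketch below auto-actions only high-confidence cases and routes uncertain ones to human reviewers. The classifier stub and the thresholds are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds; real platforms tune these per policy area.
AUTO_REMOVE_THRESHOLD = 0.95   # classifier is confident the content violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases go to a human moderator

@dataclass
class ModerationDecision:
    action: str      # "remove", "human_review", or "allow"
    score: float

def toxicity_score(text: str) -> float:
    """Placeholder for a narrow ML classifier (e.g., a fine-tuned text model)."""
    # A real system would call a trained model here; this stub is illustrative only.
    flagged_terms = ("threat", "slur")
    return 0.97 if any(term in text.lower() for term in flagged_terms) else 0.1

def moderate(text: str) -> ModerationDecision:
    score = toxicity_score(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", score)        # automated action, logged for audit
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", score)  # human makes the final call
    return ModerationDecision("allow", score)

print(moderate("This post contains a threat."))
print(moderate("Here is a recipe for soup."))
```

The design choice worth noting is the middle band: automation handles volume at the extremes, while human judgment is reserved for the ambiguous cases where errors are most costly.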
Is there anything else you would like to address concerning AI development or governance?
One critical aspect that often goes unnoticed is the lack of effective public feedback mechanisms. Presently, there is a broken feedback loop between the public, government, and companies. It is imperative to invest in structured methods of obtaining public feedback, which can encompass expert and wide-ranging red teaming, bias bounties, and similar approaches. These mechanisms are crucial for identifying and mitigating the harmful impacts of AI.