In a recent interview, Miriam Vogel, CEO of EqualAI, discussed AI and the importance of responsible AI innovation.
EqualAI, a nonprofit organization founded to reduce implicit bias in AI and promote responsible AI governance, is led by CEO Miriam Vogel. She teaches technology law and policy at Georgetown University Law Center and chairs the National AI Advisory Committee, which Congress established to advise President Joe Biden and the White House on AI policy.
In her prior role as associate deputy attorney general at the Justice Department, Vogel counseled the attorney general and deputy attorney general on a broad range of legal, policy, and operational matters. She has served as a senior advisor to the Center for Democracy and Technology and as a board member of the Responsible AI Institute, and she counseled White House leadership on issues including criminal justice, economic, regulatory, and food safety policy, as well as women's issues.
Briefly, how did you get your start in AI? What drew you to the field?
I began my career in government before entering eleventh grade, as an intern in the Senate. After catching the policy bug, I spent several summers working on Capitol Hill and then at the White House. My focus at the time was civil rights, which, in retrospect, was an unconventional but entirely logical path to artificial intelligence.
After law school, I moved from entertainment law, where I specialized in intellectual property, to an executive branch role focused on civil rights and social impact. During my tenure at the White House, I had the honor of leading the Equal Pay Task Force, and as associate deputy attorney general under former deputy attorney general Sally Yates, I oversaw the creation and development of implicit bias training for federal law enforcement.
I was asked to lead EqualAI because of my background as a technology lawyer and my work on policy addressing bias and systemic harms. I was drawn to the organization because I recognized that AI was the next frontier in civil rights: without vigilance, decades of progress could be undone in lines of code.
I have always been excited about the possibilities that innovation creates, and I still believe AI can present astounding new opportunities for more populations to flourish. But I am convinced that will happen only if we are careful at this critical juncture to ensure that more people can participate meaningfully in its conception and development.
How do you navigate the challenges of the male-dominated technology industry and, by extension, the male-dominated AI industry?
All of us are responsible for ensuring that AI is as beneficial, effective, and efficient as possible. That means stepping up our efforts to amplify the participation of women (who, incidentally, drive over 85% of consumer purchases in the U.S., making it a prudent business decision to account for their interests and safety) and of other underrepresented populations of varied ages, regions, ethnicities, and nationalities who are not sufficiently engaged in the development process.
As we work toward gender parity, we must bring more voices and perspectives into AI development so that the technology works for all consumers rather than solely benefiting its developers.
What advice would you give people who want to enter the AI field?
First, it is never too late. Never. I encourage all grandparents to give OpenAI's ChatGPT, Microsoft's Copilot, or Google's Gemini a try. We will all need AI literacy to succeed in an economy that AI increasingly drives. And that is exciting! Each of us has a role to play. Whether they are beginning a career in AI or using AI to support their existing work, women should experiment with AI tools to learn what those tools can and cannot do, see whether the tools work for them, and become AI-literate in general.
Second, developing responsible AI requires more than just ethical computer scientists. Although many assume that AI expertise belongs only to those with computer science or other STEM degrees, AI actually benefits from the perspectives and knowledge of men and women of all backgrounds. Join in! Your voice and perspective are needed. Your participation is vital.
As AI evolves, what are some of the most urgent challenges it must confront?
First, we must increase AI literacy. EqualAI takes an "AI net-positive" stance: we believe AI will offer unprecedented opportunities for our economy and improve our daily lives, but only if those opportunities are equally available to, and beneficial for, a broader cross-section of the population. We must ensure that the current workforce, the next generation, and older adults alike have the knowledge and skills to benefit from AI.
Second, we must develop standardized metrics and measurements for evaluating AI systems. Standardized evaluations will be crucial to building trust in our AI systems, allowing downstream users, regulators, and consumers to understand the limits of the systems they interact with and to judge whether those systems merit their trust. Understanding who a system was built for, and what use cases were envisioned, helps us answer the key question: For whom might this fail?
What concerns should AI users be aware of?
Put simply, AI is just that: artificial. Humans built it to "imitate" human cognition and to assist humans in their endeavors. We must bring appropriate skepticism and do our due diligence before relying on this technology, to make sure we are placing our trust in systems that deserve it. AI can supplement humanity, but it cannot replace it.
We must stay clear-eyed about AI's two fundamental components: data, which mirrors human conversations and interactions, and algorithms, which are products of human design. As a result, AI adopts and reflects our human flaws. Bias and harm can embed themselves across the entire AI lifecycle, whether through human-written algorithms or through data that captures a moment in time. Yet every point of human involvement is also an opportunity to identify and mitigate potential harm.
Because an AI program is built within the bounds of its creators' imagination, a team with a greater diversity of lived experiences and perspectives is more likely to spot the biases and other safety vulnerabilities embedded in the AI.
What is the most responsible approach to developing AI?
Building AI worthy of our trust is a shared responsibility; it is unreasonable to expect someone else to do it for us. We must ask three basic questions: (1) Who is this AI system built for? (2) What were the envisioned use cases? (3) For whom might it fail? Even with these questions in mind, pitfalls are inevitable, so designers, developers, and deployers must follow best practices to mitigate the risks.
EqualAI advocates good "AI hygiene," which consists of planning the framework, ensuring accountability, standardizing testing and documentation, and conducting routine audits. We also recently published a guide to designing and operationalizing a responsible AI governance framework. It lays out the values, principles, and structure that govern the responsible implementation of AI within an organization, and it will be valuable to organizations of any size, sector, or maturity that are adopting, developing, using, and implementing AI systems with an internal and public commitment to doing so responsibly.
How can investors promote responsible AI more effectively?
Investors have an outsized role to play in ensuring that our AI is safe, effective, and responsible. They can verify that companies seeking funding are aware of the potential liabilities and harms associated with their AI systems and have plans to mitigate them. Even asking constructive questions about how a company has implemented AI governance practices begins the process of securing better outcomes.
This effort benefits the general public and serves investors' own interests: they will want assurance that the companies they have invested in are not marred by negative publicity or entangled in litigation. A commitment to responsible AI governance is the most effective way to build and preserve public trust, one of the few non-negotiables for a company's success. Robust, trustworthy AI is good business.