Women in AI: UC Berkeley’s Brandie Nonnecke says investors should push for responsible AI practices
To give women AI academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch is launching a series of interviews focusing on the remarkable women who are contributing to the AI revolution. As the AI boom continues, we will publish several articles throughout the year, highlighting key work that often goes unnoticed. Read more profiles here.
Brandie Nonnecke is the founder and director of the CITRIS Policy Lab at the University of California, Berkeley, which supports interdisciplinary research to answer questions about the role of regulation in promoting innovation. Nonnecke is also co-director of the Berkeley Center for Law and Technology, where she leads projects on artificial intelligence, platforms and society, and of the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.
In her spare time, Nonnecke hosts TecHype, a video and podcast series that examines emerging technology policies, regulations and laws, provides insight into their benefits and risks, and identifies strategies for putting technology to good use.
Questions & Answers
Briefly, how did you get started in artificial intelligence? What drew you to the field?
I have been involved in the responsible governance of artificial intelligence for almost a decade. My background in technology, public policy and their intersection with social impact drew me to this field. Artificial intelligence is already ubiquitous and has a huge effect on our lives, for better and for worse. I'm committed to helping society use this technology for good, rather than standing on the sidelines.
What work are you most proud of (in AI)?
I'm really proud of two things we've achieved. First, the University of California was the first university to establish a responsible AI policy and governance structure to better ensure responsible procurement and use of AI. We take seriously our commitment to serve society responsibly. I had the honor of co-chairing the University of California's Presidential Working Group on Artificial Intelligence and its subsequent permanent AI Council. In these roles, I was able to gain first-hand experience thinking about how best to implement our responsible AI principles to protect our faculty, staff, students and the broader communities we serve. Second, I believe it is extremely important that society understands emerging technologies and the real benefits and risks associated with them. We launched TecHype, a video and podcast series that explains emerging technologies and provides guidance for effective technical and policy interventions.
How do you deal with the challenges of the male-dominated tech industry, and by extension, the male-dominated AI industry?
Be curious, persistent and undaunted by imposter syndrome. I believe it is extremely important to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in the tech industry is a powerful way to share experiences, advice and encouragement.
What advice would you give to women seeking to enter the AI industry?
My advice for women entering the field of artificial intelligence is threefold: continuously seek knowledge, because AI is a rapidly evolving field. Take advantage of networking, because contacts will open doors to opportunities and offer invaluable support. Advocate for yourself and others, because your voice is essential in shaping an inclusive, equitable future for AI. Remember that your unique perspectives and experiences enrich the field and drive innovation.
What are the most pressing issues facing artificial intelligence as it evolves?
I believe one of the most pressing issues facing the evolution of artificial intelligence is not to be misled by the latest hype cycles. We're seeing this now with generative AI. Sure, generative AI is a significant advance and will have a huge effect, both good and bad. However, other forms of machine learning in use today quietly make decisions that directly affect every person's ability to exercise their rights. Instead of focusing on the latest marvels of machine learning, it is more important that we focus on how and where machine learning is being applied, regardless of its technological prowess.
What issues should AI users be aware of?
Users of AI should be aware of issues related to data privacy and security, the potential for bias in AI decision-making, and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more accountable and fair AI systems.
What is the best way to build AI responsibly?
Building artificial intelligence responsibly requires taking ethical considerations into account at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias management strategies and ongoing impact assessments. Prioritizing the public good and ensuring that AI technologies are grounded in human rights, equity and inclusion are essential.
How can investors better promote responsible AI?
This is such an important question! For a long time, we never talked explicitly about the role of investors. I can't express enough how influential investors are! I believe the claim that "regulation stifles innovation" is overused and often untrue. Instead, I strongly believe that smaller firms can gain a late-mover advantage and learn from larger AI firms that are developing responsible AI practices, as well as from guidance from academia, civil society and government. Investors have the power to shape the direction of the industry, making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives focused on addressing societal challenges through AI, promoting diversity and inclusion in the AI workforce, and advocating for strong governance and technical strategies that help ensure AI technologies benefit society as a whole.