A letter from WWC Managing Director Rochelle Haynes
Dear City Leaders,
Artificial intelligence will be a top issue to watch in 2024. That message was clear as we at Bloomberg Philanthropies' What Works Cities Certification recently evaluated local governments’ data practices and Certified 12 new cities as among the world’s best at using data to deliver results for residents. Already, Certified cities are using AI to change how they’re run: Austin uses it to combat wildfires, New Orleans to improve traffic safety, and Seattle to reduce emergency vehicle travel time. The federal government is getting in on the action too. Inspired by President Biden’s executive order on AI, the National Science Foundation is piloting new collaborations between tech companies and federal agencies to develop cutting-edge AI applications.
These developments are not only changing the way government works. AI is also transforming how doctors diagnose illnesses, serving as a digital assistant that drafts memos and emails, and improving facial recognition for security and verification. Some people are excited by the incredible potential of AI, while others fear that these new tools will leave millions of workers unemployed and put dangerous biases on autopilot.
AI’s potential to improve residents’ lives and transform city operations
I, like many other leaders, am eager to embrace AI’s potential to help cities collect data, manage information, and aid decision-making. With intentional use of AI, city leaders can reduce repetitive tasks and make government operations more efficient, enabling time and resources to be invested back into communities to address resident needs.
While I am excited about the promise of AI, there are clear risks if its use is not governed properly. As city leaders explore uses for AI more deeply this year, here are three things they should do to empower civil servants and protect residents.
First, create clear guidelines. For city staff, using AI presents many quandaries, from protecting residents’ privacy to navigating public records requests. City staff need a clear roadmap on how to use these tools responsibly and in a way that leads to improved outcomes for residents. Increasingly, we’re seeing Certified cities such as Boston and Seattle lead the way in developing policies for using AI responsibly. Recently, Tempe’s city council passed an Ethical Artificial Intelligence Policy, and San José established guidelines for staff on using generative AI tools that acknowledge the potential benefits as well as the bias and privacy issues. As San José Mayor Matt Mahan recently shared with colleagues at the U.S. Conference of Mayors, these guidelines freed employees to experiment with AI. “By creating a basic set of guidelines we gave to staff permission — you can use it, and you should use it,” Mahan said. “But here’s how to do it safely and responsibly.”
Second, create community feedback loops. City leaders can’t develop these guidelines and begin experimenting with AI in a vacuum. It’s critical to continuously collect feedback from everyone involved, including residents. Robust community engagement will help gain buy-in and fuel long-term success that doesn’t lift up a few communities while leaving others behind. One way to do that: Create a city-wide task force made up of diverse voices to assess new uses of AI. As the technology grows more sophisticated, such groups can ensure that its use in local government aligns closely with public values.
Third, conduct high-quality audits of AI policies and practices. The risk of AI amplifying existing biases in health care, policing, education, and more is too great to be left to chance. Cities must regularly audit their uses of AI to ensure the technology is applied fairly and equitably. New York City’s recently enacted law and accompanying rules, which require transparency and bias checks around the use of AI in hiring and promotion, are a good start. Similarly, San José publishes a register of all the ways AI and algorithms are used in local government. Regularly taking stock of AI’s uses and impacts in this way won’t just protect residents from its worst downsides; it will also keep cities plugged in to how the technology is evolving so they can take full advantage of it.
At this point in our emerging understanding of AI, we have more questions than answers. I encourage all of us to lean into this gray space and use our collective thinking to create common definitions and guidelines for making the most of AI while minimizing harm. This is potentially a watershed moment for our field. AI can supercharge the way local governments manage programs, measure performance and use data to make smarter decisions about how to improve residents’ lives — if we’re smart about how we use it.
I look forward to navigating this exciting time with all of you and to fostering more thoughtful conversations as we explore the power of AI together.
Sincerely,
Rochelle Haynes
What Works Cities Certification Managing Director