Ted Lieu, member of Congress, 36th District, will keynote the 24th General Assembly.

For the 24th General Assembly, the South Bay Cities Council of Governments (SBCCOG) invites experts to explore artificial intelligence (AI/generative AI) and the types of AI applications that could benefit local government, as well as potential red flags cities should consider.

As a preview to the event, South Bay Watch invited the event’s keynote speaker, 36th District U.S. Representative Ted Lieu, to answer questions about AI’s potential impacts on cybersecurity and benefits to society. In February, Lieu was appointed co-chair of a bipartisan congressional task force on artificial intelligence.

You have a history of championing the cause of cybersecurity. How and why does the emergence of AI fuel this passion?
AI is developing at an incredible speed and presents a unique challenge to the government. How can we effectively regulate this emerging technology while ensuring that we can harness its benefits? We’ve already seen AI’s impact across society, from smart appliances in our homes to self-driving cars and large language models. As a member of Congress, my priority is the safety and well-being of my constituents and our nation. As a recovering computer science major, I know that AI in the wrong hands could create serious risks. It’s essential to harness the benefits of AI while mitigating those risks.

You have helped introduce a number of bills pertaining to protection from AI harm. In January, you and other lawmakers introduced the Federal Artificial Intelligence Risk Management Act, to require that U.S. federal agencies and vendors follow the AI risk management guidelines put forth by the National Institute of Standards and Technology (NIST). How will this help us proactively avoid potential AI pitfalls?
To meaningfully address AI at the federal level, the executive branch must adopt a uniform set of guidelines to protect our national security, our economy, and the rights of Americans. While this might sound like a daunting task, the hard work of crafting these guidelines has already been done. NIST collaborated with experts in the private and public sectors to develop their AI Risk Management Framework, which can help individuals and organizations mitigate AI risks. Right now adoption of this framework is voluntary, but I believe implementation of this guidance by vendors and federal agencies would help promote consistent, consensus-based practices for developing and employing AI in safe and effective ways.

Zooming in to the local level, how can cities seize this moment in time to instill similar AI protections?
I encourage cities to consider adopting the NIST AI Risk Management Framework themselves and for their vendors. When we talk about the whole-of-government approach necessary to mitigate potential risks of AI, we must include local governments. Local governments are uniquely positioned to impact the daily lives of the people they represent through a variety of essential services. How many of these services could be helped or harmed by AI? City governments must be prepared to take advantage of AI’s strengths while protecting their constituents from its potential harm.

Diving deeper into these benefits and risks, what do local governments need to be vigilant about when it comes to AI adoption?
There are certainly ways in which AI can make local governments more efficient, but we need to be careful about how the technology is implemented and how governments operate it. That’s why I encourage organizations to implement NIST’s AI Risk Management Framework. For example, our state’s Department of Transportation is exploring ways that AI could help reduce traffic, and the Los Angeles County Department of Health Services is experimenting with using AI to combat homelessness. These applications could prove to be incredibly useful in improving local government operations and assisting the public. At the same time, governments should be mindful of potential pitfalls. Facial recognition technology is an example of a narrow application of AI that federal, state and local law enforcement agencies have deployed in criminal investigations. But the technology is less accurate for women and people of color and can be used in ways that violate Americans’ civil liberties.

What are some exciting ways city workers and constituents could directly benefit from AI’s use in local government?
AI-powered computer programs could have wide-ranging benefits for cities, their employees and their constituents. AI tools such as data gathering and predictive pattern analysis could help cities improve tasks like casework responses, constituent outreach, payroll for city employees or even service schedules for public facilities.

Replaced by bots? While AI technology is impressive, according to Ted Lieu, “We still need government employees with expertise to run government operations.” Image created using Adobe generative technology.

How can local government workers get their feet wet in AI? And should they be worried about slowly being replaced by bots?
Governments already use AI for a variety of priorities, including urban planning and transportation. One of the most important ways to get to know AI is to see it in action. I encourage anyone who may be interested to check out AI tools for themselves to understand the immense capability of this technology. I encourage folks to experiment with large language models like ChatGPT. While the technology is impressive, we still need government employees with expertise to run government operations.

You have discussed the importance of adopting the NIST AI Risk Management Framework. What other actions should federal, state and local governments take in further AI policy development?
It is essential to our nation’s security that governments at all levels take action to protect their constituents from the risks of AI. From President Biden’s recent executive order on AI to city governments using AI software to automate delivery of municipal services, each elected official has a responsibility to respond to this rapidly developing technology.

At the federal level, I am working with my colleagues in Congress to counter urgent threats to our security. Earlier this year I introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act, which would safeguard our nuclear command and control process by ensuring that nuclear weapons can never be launched solely by AI.

At the state level, I was pleased to see Governor Newsom sign an executive order last fall to study the development, use and risks of AI throughout the state and develop a responsible deployment process within the state government. California is a leader in emerging technology, and this action will ensure that we remain our nation’s foremost hub for AI development and use.

City governments are well positioned to introduce individuals to AI technology and its uses. Whether it’s educational events at public libraries or the use of AI tools at city hall, city governments could play a vital role in the education of their constituents and the implementation of helpful AI programming. I look forward to continuing my work with the Biden-Harris administration, Governor Newsom and local officials throughout coastal Los Angeles County to ensure that our communities can take advantage of AI’s power while we work to counter its potential risks. •

Learn more and register to attend the 24th General Assembly.