By PAUL RUFFINS

Images courtesy of Gemini

During the Feb. 18 city council meeting, Hyattsville City Administrator Tracey Douglas and Deputy City Administrator Laura Reams spoke about attending a recent conference of the International City/County Management Association, where there were many discussions about using generative AI to evaluate large data sets, increase efficiency and improve city services. Despite all the potential benefits, Douglas noted that the city is not investing any money in AI at this time. “We are in the exploration phase,” Douglas said. “This is a part of us just being transparent with the community and the council.” She also emphasized that she is aware of potential security risks and environmental concerns.
The majority of public comments concerning AI at the Feb. 18 meeting were negative in tone. Resident Daniel Broder, who said he was working on a doctorate in AI, bluntly told the council, “Do not spend money on this. There are no guardrails in place for people’s safety.” Broder noted that data centers consume huge amounts of electricity, and he strongly cautioned against making certain data available to the police.
In the days leading up to the council meeting, the HOPE (Hyattsville Organization for a Positive Environment) listserv reflected a similar apprehension. Hyattsville resident Michael Gorman wrote, “The chances that we’re going to use AI to make things ‘more efficient’ are, I would say, exactly zero.” Resident Chuck Perry, who described himself as an electrical engineer, argued that Hyattsville’s AI policy appeared to be influenced by, and to mirror, the Trump Administration’s open door to “unchecked deployments with fewer oversight mechanisms.”
Later in the meeting, councilmembers discussed several basic AI applications they’d consider adopting. A new AI-generated closed-captioning service for streaming public meetings would cost approximately the same as the current service ($12,000 for 100 hours, or $120 an hour), while also offering translation capabilities. Another application under consideration would take notes during meetings at a cost of $8 to $20 an hour. Councilmember Emily Strab (Ward 2), of the city’s Police and Public Safety Committee, pointed out that the stop-sign cameras installed to protect children in school zones use a program similar to AI.
In marked contrast, the tone of a July 2024 county council meeting was much more positive about AI. On July 2, 2024, County Councilmember Wala Blegay (District 6) and four other councilmembers introduced County Resolution 061-2024, which called for the establishment of an AI task force, partly to “build public trust in the Artificial Intelligence era.” In a separate statement, Blegay wrote, “As we embrace the era of AI, it is imperative for Prince George’s County to proactively engage with this transformative technology to enhance government services to drive economic growth and ensure a competitive edge. The Task Force represents a pivotal step towards harnessing the potential of AI to benefit our community and shape a brighter future for all county residents.”
In an interview with the Life & Times, Blegay explained that the resolution didn’t pass, but only because then-County Executive Angela Alsobrooks assured her that she was already promoting a similar idea. Blegay explained that the county probably has the highest percentage of Black AI experts in the nation because so many work for government agencies or the Pentagon.
“However, several of my constituents who love living here told me they felt they had to locate their businesses in Virginia because Prince George’s wasn’t perceived to be a hub of research development, despite being home to the University of Maryland, Bowie State University (BSU) and Prince George’s Community College,” Blegay said. Blegay is adamant about the potential of AI as a driver of economic growth and said she feels it’s critical that the area’s students be trained in this emerging technology.
Although Blegay did not go into specifics, she said she feels AI would be useful in managing traffic or improving the performance of the county’s 311 system, a nonemergency call center, which, she says, can take months to respond to residents. According to the “Artificial Intelligence Handbook for Local Government,” published in September 2024 by the University of Michigan, “AI tools have been used to assist with traffic analysis to improve road network efficiency and reduce carbon dioxide emissions. This can include predicting the need for road maintenance, improving signal timing, optimizing bus routes, and enforcing speed limits.”
The handbook cites examples of AI being used successfully to reduce traffic delays in Chattanooga, Tenn., and prevent eviction for at-risk families in Los Angeles. It also details notable failures, such as one example in Michigan where a privately contracted AI service for the Michigan Unemployment Insurance Agency incorrectly flagged 85% of cases as deliberately fraudulent when the problems were instead due to simple mistakes.
Ultimately, the handbook strongly recommends against governments using generative AI to produce and release images or video that will not be inspected and approved by human beings, and suggests that AI poses the greatest risk when used for real-time decision-making by the police, noting that “AI facial recognition used in law enforcement software consistently misidentified Black faces at a higher rate than White faces.”
County resident and tech entrepreneur Vennard Wright, who is Black, said he was inspired to develop an AI system after a May 2023 attempted shooting on a school bus in Oxon Hill. “That’s when I asked myself whether AI could help, particularly in preventing mass shootings,” Wright said. He started PerVista, a security company that monitors video feeds and uses AI to identify firearms in environments like schools and hospitals.
“Our software has been trained on thousands of images of guns, so it has an inherent bias towards firearms, particularly long guns and assault weapons,” Wright said. He noted that clients can also determine how certain the software needs to be before it automatically notifies law enforcement. In an empty building at night, 40% certainty that a long, thin object held horizontally at eye level is a firearm might be enough to alert security. In hospitals, where many people use canes or crutches, the AI might have to be 60% certain to trigger a response.
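In software terms, the threshold Wright describes is simply a number each client sets, which the system compares against the model’s confidence score for every detection. The sketch below is a hypothetical illustration of that logic, not PerVista’s actual code; the function names, labels and figures are assumptions drawn from his examples.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model believes it sees, e.g., "long_gun"
    confidence: float  # the model's certainty, from 0.0 to 1.0

def should_alert(detection: Detection, site_threshold: float) -> bool:
    """Notify security only when the model's certainty that the object
    is a firearm meets the threshold the client configured for the site."""
    return detection.label == "long_gun" and detection.confidence >= site_threshold

# An empty building at night might set a low bar: 40% certainty that a long,
# thin object held horizontally at eye level is a firearm is enough to alert.
warehouse_threshold = 0.40
# A hospital full of canes and crutches demands more certainty before alerting.
hospital_threshold = 0.60

sighting = Detection(label="long_gun", confidence=0.45)
print(should_alert(sighting, warehouse_threshold))  # True: alert security
print(should_alert(sighting, hospital_threshold))   # False: below the hospital's bar
```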
When asked about racial biases inherent to AI used by law enforcement, Wright pointed out that his technology was recently welcomed in a situation where almost everyone was Black. In 2023, there was a fatal shooting during BSU’s homecoming. In 2024, the university used PerVista’s remote-controlled drones and software to scan for firearms during its homecoming football game and other events, all of which ended safely, according to Wright.
Wright explained that racial bias arises when AI programmers consciously or unconsciously believe that Black or Hispanic people are more likely to be criminals and, therefore, consciously or unconsciously start training their AI programs to overfocus on race. Even true presumptions — for example, that shooters are almost always male — can cause problems if they result in AI systems not getting enough training in recognizing women with guns.
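The imbalance Wright describes can be made concrete with a simple count. The snippet below uses invented numbers, not any real training set, to show how tallying labels exposes a data set that gives a detector far less practice recognizing armed women than armed men.

```python
from collections import Counter

# Invented label counts for illustration only; no real data set is implied.
training_labels = ["armed_man"] * 4800 + ["armed_woman"] * 200

counts = Counter(training_labels)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n} images ({n / total:.0%} of the training set)")
# armed_man: 4800 images (96% of the training set)
# armed_woman: 200 images (4% of the training set)
# A detector trained on this split gets little practice on armed women and is
# more likely to miss them, the failure mode Wright warns about.
```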
“AI systems reflect the biases, beliefs, and assumptions of the people developing them,” Wright said. “That’s why we’re not training our system to identify different types of people. We’re teaching it to get better and better at finding guns.”
