The Smart Cities movement, which seeks data-driven technological solutions to everyday urban challenges, is increasingly turning to artificial intelligence to deliver "services" to residents—everything from locating gunshots and finding tumors to dispatching work crews to pick up trash.
New York is one of about 90 cities worldwide that use ShotSpotter, a system that relies on a network of microphones to instantly recognize and locate gunshots. In Moscow, all chest X-rays taken in hospitals are run through an AI system to recognize and diagnose tumors. And Taiwan is building a system that will be able to predict air quality, allowing city managers to warn residents of health dangers and work to lessen what the data tells them will be the worst of the impacts.
What constitutes a "Smart City" isn't well-defined. In the broadest sense, a Smart City is one that uses electronic means to deliver services to its residents. But dig down even a little, and delivering even on that simple promise can be exquisitely difficult. For example, Smart City technology might strive to eliminate the need to call up your alderman to complain that the streets aren't getting plowed.
Instead, a network of sensors—yes, an Internet of Things—would know when the snow is falling, how much has fallen, where the snowplows are, when they've last been on your street, and when they'll be there next. All of that would be delivered in a browser or app to anyone who cares to either dial in or build their own information utility using that freely available data.
Of course, you'll need a communications infrastructure to allow all those sensors to talk to each other and a central database, as well as application programming interfaces and data dictionaries to let the snowplow data be accessed by other services—such as the fire department, which could use that information to better position its ambulances in bad weather.
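To make the snowplow example concrete, a consumer of such an open data feed might look something like the sketch below. The endpoint shape and field names here are hypothetical, invented for illustration; no real city's schema is implied.

```python
import json
from datetime import datetime, timezone

# Hypothetical snowplow feed; field names are illustrative, not any real city's schema.
FEED = """
{
  "snowfall_inches": 4.2,
  "plows": [
    {"id": "plow-17", "street": "Elm St", "last_plowed": "2021-02-01T06:30:00+00:00"},
    {"id": "plow-09", "street": "Oak Ave", "last_plowed": "2021-02-01T03:05:00+00:00"}
  ]
}
"""

def streets_needing_service(feed_json, now, max_age_hours=4):
    """Return streets not plowed within max_age_hours of 'now'."""
    data = json.loads(feed_json)
    stale = []
    for plow in data["plows"]:
        last = datetime.fromisoformat(plow["last_plowed"])
        age_hours = (now - last).total_seconds() / 3600
        if age_hours > max_age_hours:
            stale.append(plow["street"])
    return stale

now = datetime(2021, 2, 1, 8, 0, tzinfo=timezone.utc)
print(streets_needing_service(FEED, now))  # → ['Oak Ave']
```

A third party could build exactly this kind of "information utility" on top of the published data, which is the point of making the feed open in the first place.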
Oh, and because this is the government, doing it on a tight budget, securely, and with maximum up-time are all design goals, too.
And that's just one application. Consider all the functions that a municipal government provides, and it becomes readily apparent why no city is completely "smart" and how artificial intelligence and machine learning could readily get applied to the Smart Cities movement. Thus, the latest catchphrase of "Smart City" technology hawkers is AIoT: Artificial Intelligence incorporated into the Internet of Things.
Inevitable tensions, however, have sprung up between AI/ML and the Smart Cities movement. One of the hallmarks of Smart Cities is the maximal openness and availability of the data that's collected to make a smart city possible. Chicago, for instance, publishes its government data, as do New York, Barcelona (here in its English version), Moscow, and the island nation of Taiwan. But AI and ML algorithms are opaque by nature, not necessarily something that a councilman or community organizer can readily understand. Political processes in every jurisdiction reflect local customs, needs, and desires, any of which may include levels of scrutiny for, among other values, fairness in the provision of services.
An AI that learns to send police to certain neighborhoods faster than others—or is suspected of doing so—is not an AI that would survive a political process. The politicians who put such a system in place would be unlikely to survive, either. The Smart Cities ideal—at least, under non-authoritarian regimes—is to use all that data to provide services better and more efficiently, not to engender urban dystopia. Expect AI systems in Smart Cities to make recommendations to actual people.
Spotting shots and tumors
Take the ShotSpotter system used by New York City and Washington, DC. Studies have shown that only one out of every eight incidents of gunfire is reported to authorities.
ShotSpotter uses networks of strategically placed microphones to listen for gunshots.
When the mics pick up a noise that matches the signature sound of a gunshot, the ShotSpotter system triangulates a location and sends the recording and associated information to a human being who decides whether the sound was, in fact, a gunshot.
Within 60 seconds of hearing something, ShotSpotter's sensors can report the latitude, longitude, and altitude of the shot, how many shots there were, and the direction and speed of the bullet's travel. In New York's implementation of ShotSpotter, information about confirmed shots fired is merged with the city's own address database (because not all locations have a street address), surveillance video, crime and shooting histories for the location, and the name and pictures of anyone with open warrants at that address, as well as any gun permits issued in that area. Police responding to the call have all that data on their computers or tablets by the time they arrive at the scene.
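The triangulation step rests on time difference of arrival: each microphone hears the shot at a slightly different moment, and those offsets pin down the source. ShotSpotter's actual algorithms and sensor placements are proprietary; what follows is only a toy brute-force sketch with invented sensor positions, to show the principle.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, in air

# Illustrative 2D sensor layout (meters); real deployments are proprietary.
sensors = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]

def arrival_times(source, t0=0.0):
    """Simulate the time each sensor hears a shot fired at `source` at time t0."""
    return [t0 + math.dist(source, s) / SPEED_OF_SOUND for s in sensors]

def locate(times, step=5.0):
    """Grid-search the point whose pairwise time differences best match `times`."""
    best, best_err = None, float("inf")
    for xi in range(101):
        for yi in range(101):
            p = (xi * step, yi * step)
            pred = [math.dist(p, s) / SPEED_OF_SOUND for s in sensors]
            # Compare differences relative to sensor 0, so the unknown
            # moment of firing cancels out of the equations.
            err = sum(((t - times[0]) - (pt - pred[0])) ** 2
                      for t, pt in zip(times[1:], pred[1:]))
            if err < best_err:
                best, best_err = p, err
    return best

times = arrival_times((200.0, 350.0))
print(locate(times))  # → close to (200.0, 350.0)
```

Production systems solve this with least squares over many sensors rather than a grid search, and must also cope with echoes, wind, and competing noise sources, which is why a human still confirms each detection.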
In all, 60 square miles of New York City—about 20 percent of the city—are covered by ShotSpotter. To prevent the possibility of vandalism, the exact areas covered and the locations of the sensors are both proprietary—even the NYPD doesn't know. The city says it's responding to four to five times as many gunshots as it did before ShotSpotter was implemented and that it has been able to match guns and bullets that are recovered with other open crimes.
The solution is not perfect. ShotSpotter's architecture means it can only cover areas of about three square miles or larger, so violent pockets smaller than that require different, more traditional solutions. Also, ShotSpotter, not the city, owns the system, and the contract is up in 2021. Still, the police department sounds very pleased with the value it's getting for the money.
Things are a little different in Moscow
In the somewhat clubby world of Smart Cities, Moscow's efforts have long been a little obscure. A recent report from the International Telecommunication Union details the city's efforts. Those include a Unified Medical Information Analysis System, which directs people to the nearest of the city's 678 clinics and maintains e-health records for 78 percent of the city's residents. But the most interesting AI-related project going on in Moscow is a pilot project that reviews all MRI and CAT scans for precursors to lung and breast cancers.
Information about the project is scarce. But as of last year, said Andrey Belozerov—strategy and innovations advisor to Moscow's CIO and Moscow's point person for its Smart Cities efforts—the city has scanned more than 6,000 images and claims that it has picked up 225 cases that a previous process had missed. Belozerov says that Moscow is participating in RadIO, an open source data science project on GitHub focused on the analysis of CT scans.
Another pilot uses Moscow's medical data and school attendance data to predict virus outbreaks in schools. Based on AI-processed information, the city says, schools can tell students to stay home, thereby preventing full-bore outbreaks.
Breathing the air, and taking a pulse
Taiwan has an air-quality problem, and the earlier the government knows how bad the air will be and where, the safer its residents will be. The goal is to predict three days out the level of particulates and ozone, then produce an eight-hour warning of high ozone levels that would advise people to curtail outdoor activities. As early as 1993, the Taiwanese government started putting up air-quality monitors that measured particulates and ozone levels. Today, about 140 such stations can be found around the country.
Mike Lee, executive vice president of the Taiwanese telco FarEasTone, told a panel at Mobile World Congress in 2018 that Taiwanese weather authorities took a historical data model from London and modified it, iterating until it was predictive for them. The weather stations collect 60 parameters, to which the system adds population-density data gathered from the carrier. (It turns out that where there are people on any given day, there's more pollution.) The results get rolled up to AQI scores presented on a live map by Taiwan's Environmental Protection Agency, as well as other online charts. The data is used to generate three-day predictions of air quality.
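The AQI roll-up itself is a piecewise-linear interpolation over concentration breakpoints. As an illustration, the sketch below uses the US EPA's PM2.5 breakpoints; Taiwan's EPA applies its own, similar scheme, so treat the specific numbers as an example rather than Taiwan's exact table.

```python
# US EPA-style AQI from a 24-hour PM2.5 concentration (µg/m³).
# Each tuple: (conc_low, conc_high, index_low, index_high).
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),
    (12.1, 35.4, 51, 100),
    (35.5, 55.4, 101, 150),
    (55.5, 150.4, 151, 200),
    (150.5, 250.4, 201, 300),
    (250.5, 500.4, 301, 500),
]

def pm25_to_aqi(conc):
    """Linearly interpolate the concentration within its breakpoint band."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration out of range")

print(pm25_to_aqi(35.4))  # → 100, the top of the "moderate" band
```

The ML model's job is to forecast the raw concentrations three days out; converting them to a public-facing index like this is the easy, deterministic last step.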
Public health is also the goal behind another ambitious AI-based project in New York City: the Syndromic Surveillance System. SSS has its roots in a 1990s-era project to gather information about flu-like symptoms from 911 calls and send it to the city's health department. Data sharing like that was extremely uncommon two decades ago, but the 911 data proved to be insufficiently specific to be helpful.
Over the years, though, the system evolved to collect far more detailed and health-relevant information from emergency medical responders, hospital emergency rooms, drug stores that report both prescription and over-the-counter drug sales, and visits to the school nurse. That data is scooped up every day and thrown into data models in an effort to spot illness trends.
Health in the Big Apple
Today in New York City, 75 percent of EMS calls and school nurse visits, 10 percent of prescription drug sales, about a quarter of over-the-counter drug sales, and all emergency room visits are reported to the city's health department, down to the ZIP code. That means, for instance, that if there are 200 reports of diarrhea coming in from emergency rooms in three ZIP codes in Queens, the word will go out the next day that a food-borne illness may be lurking around there. If nurses in a couple of schools in the Bronx are seeing kids with fever and the local Walgreens stores are suddenly selling stacks of Aleve, that may be an early indicator of flu. And if cigarette taxes go up, the health department can track the impact on sales of cigarettes and nicotine gum.
Because the vocabulary of health data is fairly restricted and standardized, the Syndromic Surveillance System can detect incidences of 22 communicable diseases and about 25 non-contagious health conditions, such as asthma, gunshot wounds, or use of synthetic cannabinoids (known as K2 or "spice"). In this last case, an outbreak of nearly 100 K2 overdoses in the summer of 2018 was quickly identified. The sources were shut down within days, thanks in part to this system.
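The trend-spotting underneath such a system can be approximated as a comparison of today's counts against a historical baseline, ZIP code by ZIP code. The city's actual models are more sophisticated than the z-score check below, and the counts here are invented, but the shape of the problem is the same.

```python
import statistics

def flag_anomalies(history, today_counts, z_threshold=3.0):
    """Flag ZIP codes whose count today sits far above the historical mean.
    A simplified z-score check, not the health department's actual model."""
    flagged = []
    for zip_code, today in today_counts.items():
        past = history.get(zip_code, [])
        if len(past) < 2:
            continue  # not enough baseline to judge
        mean = statistics.mean(past)
        sd = statistics.stdev(past) or 1.0  # avoid dividing by zero
        if (today - mean) / sd > z_threshold:
            flagged.append(zip_code)
    return flagged

# Hypothetical daily diarrhea-report counts from ERs, keyed by ZIP code.
history = {"11368": [3, 5, 4, 6, 4, 5, 3], "11372": [2, 3, 2, 4, 3, 2, 3]}
today = {"11368": 40, "11372": 3}
print(flag_anomalies(history, today))  # → ['11368']
```

The restricted, standardized vocabulary mentioned above is what makes this feasible: when every reporting source codes "diarrhea" the same way, the daily counts are actually comparable.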
Not every AIoT application is as glamorous as those. Las Vegas, for instance, is using AI/ML to keep the streets clean. The city has rolled out an AI system with video cameras in two parks near the Downtown neighborhood.
Rather than sending clean-up crews out on a regular schedule, work orders get generated only when the automated systems see trash and graffiti. The system doesn't look for people who dump the trash or write the graffiti; it's just interested in the garbage itself. The city hasn't quantified the savings yet, but it has noticed one behavioral change: the workers who do the clean-up seem to prefer having a routine schedule to being called to the sites only when they're needed.
The population of Vietnam's Ho Chi Minh City has grown rapidly this century, and overbuilding has become a problem. The city is working with the World Bank to teach an AI to look at satellite imagery of the metropolis and recognize land cover and land-use patterns. This data will be used in city planning and management. Parks don't require much in the way of city services, but dense housing does. Moreover, development in areas that aren't zoned for it guarantees that those areas will get less of the city services they need. The geospatial data is overlaid with administrative information—district boundaries and the like—to give a picture of what's really going on. On-the-ground surveys are not always reliable, but it's hard to hide from a satellite.
AI and ML are not panaceas for smart cities, because cities are not about technology; they're about the people who live in them and the systems that deliver them services that they need. At best, the technology can help dig through the vast repositories of data that a modern city generates, helping humans make decisions that affect other humans. For now, at least, people are still at the controls.
This article originally ran on arstechnica.com.