Artificial intelligence (AI) programs are changing how people view computers. Machines once dismissed as lifeless boxes can now analyze information in ways once thought impossible. This trend has far-reaching implications, especially within government. With that in mind, what can the modern agency gain from deploying AI?
Prison reform
One of the main problems facing the U.S. right now is the country's incredibly high prison population. According to the American Civil Liberties Union, around one in 110 American adults is currently serving time, whether in a prison or a local jail. That is the highest per capita prison population in American history. On top of that, around one in 35 U.S. adults is in some way caught up in the correctional system, a figure that counts both prisoners and those on parole. For a developed, first-world country, these statistics are troubling, to say the least.
Putting aside the moral questions raised by imprisoning such a high percentage of the population, this trend is costing U.S. taxpayers a great deal of money. According to the Bureau of Prisons, the average annual cost of a federal inmate is $30,619.85, which covers everything from housing to the guards staffing the facility.
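For a rough sense of scale, the back-of-envelope estimate below simply multiplies the incarceration rate and per-inmate cost cited above. The adult population figure is an assumption added purely for illustration, and the federal per-inmate cost is applied to all inmates only to keep the math simple.

```python
# Rough, illustrative estimate of annual incarceration spending.
# The 1-in-110 rate and per-inmate cost come from the sources cited above;
# the adult population figure is an assumed round number for illustration,
# and the federal per-inmate cost is applied to all inmates for simplicity.

ADULT_POPULATION = 245_000_000   # assumed U.S. adult population (approximate)
INCARCERATION_RATE = 1 / 110     # roughly one in 110 adults behind bars
COST_PER_INMATE = 30_619.85      # average annual cost per federal inmate (USD)

inmates = ADULT_POPULATION * INCARCERATION_RATE
annual_cost = inmates * COST_PER_INMATE

print(f"Estimated inmates: {inmates:,.0f}")
print(f"Estimated annual cost: ${annual_cost / 1e9:,.1f} billion")
```

Even under these simplified assumptions, the estimate lands in the tens of billions of dollars per year, which is why the cost argument carries so much weight alongside the moral one.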
Although this problem needs to be tackled on multiple fronts to produce a viable, long-lasting solution, AI systems are increasingly seen as one way to lower the American prison population while still ensuring that justice is carried out. At a conference covered by Government Technology, Lynn Overmann, Senior Advisor to the U.S. Chief Technology Officer for criminal justice, stated that one of the main problems facing those within the prison system is bias.
One example Overmann gave is how sentencing works in drug cases. She noted that while rates of drug use among African Americans and Caucasians are almost identical, African Americans tend to receive longer prison sentences. This group is also more likely to be arrested for these kinds of crimes, showing that racial bias can be found both inside and outside the courtroom.
This is why Overmann is pushing so hard to advance AI systems to the point where they can be integrated into current criminal justice practices. Algorithms hold no personal prejudices, and the hope is that carefully designed systems could strip much of this bias out of sentencing.
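As a purely hypothetical illustration of the concept, the sketch below trains a simple risk-scoring model on synthetic case data while deliberately leaving protected attributes such as race and gender out of its inputs. This is not Overmann's proposal or any deployed tool, and in practice correlated features can still reintroduce bias, which is exactly why strict oversight would matter.

```python
# Hypothetical sketch: a risk-scoring model that never sees protected
# attributes. All data here is synthetic; this is not a real sentencing tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Synthetic case records: prior convictions, age at first offense, offense severity.
priors = rng.integers(0, 10, n)
age_first = rng.integers(16, 50, n)
severity = rng.integers(1, 5, n)

# Synthetic outcome (re-offense within 3 years), generated only from the
# features above -- no race or gender anywhere in the pipeline.
logits = 0.4 * priors - 0.05 * age_first + 0.3 * severity
reoffended = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([priors, age_first, severity])
model = LogisticRegression(max_iter=1_000).fit(X, reoffended)

# Score a new (hypothetical) case: 2 priors, first offense at 30, severity 2.
print(model.predict_proba([[2, 30, 2]])[0, 1])
```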
Improved cybersecurity
We've previously discussed at length the major problem government agencies face in acquiring cybersecurity talent. Secretary of Homeland Security Jeh Johnson has openly stated that multiple departments simply cannot get computer experts to make the jump to the public sector. There are many reasons behind this, but at the end of the day, the U.S. simply isn't as secure as it should be.
Again, this problem will need to be attacked from multiple angles to arrive at the best solution, but one promising avenue is wider use of AI cybersecurity systems. Another major advantage machines have over people is that they are extremely good at reading long, tedious stretches of data. Keeping up to date with the latest cybersecurity trends is vital to mitigating the risk of a breach, but humans simply aren't suited to reviewing all of the information gathered in the wake of every single attack.
This is why MIT's Computer Science and Artificial Intelligence Laboratory teamed up with the private company PatternEx to develop an AI system that does just that. The platform the teams have built reviews more than 3.6 billion log lines tied to cybersecurity incidents. In doing so, the system learns how hackers are accessing private data and what can be done to stop them. In fact, the machine is so good at its job that it currently detects roughly 85 percent of attacks.
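Public descriptions of the MIT/PatternEx work describe a loop in which unsupervised outlier detection flags suspicious events, human analysts label a handful of them, and a supervised model learns from that feedback. The sketch below imitates that general loop on made-up log features; it is an assumption-heavy illustration, not the actual system.

```python
# Minimal sketch of an "analyst-in-the-loop" detection cycle, loosely modeled
# on public descriptions of the MIT/PatternEx approach. Log features are
# synthetic; this is not the real system.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic per-user features: login attempts, bytes transferred, failed logins.
normal = rng.normal(loc=[5, 200, 1], scale=[2, 50, 1], size=(5_000, 3))
attacks = rng.normal(loc=[40, 2_000, 15], scale=[5, 300, 3], size=(50, 3))
events = np.vstack([normal, attacks])

# Step 1: unsupervised outlier detection surfaces the most suspicious events.
detector = IsolationForest(contamination=0.02, random_state=0).fit(events)
scores = detector.score_samples(events)
suspicious = np.argsort(scores)[:100]             # top-100 outliers for review

# Step 2: a human analyst labels the flagged events (simulated here).
labels = (suspicious >= len(normal)).astype(int)  # 1 = confirmed attack

# Step 3: a supervised model learns from the analyst's feedback and scores
# the next batch of events automatically.
classifier = RandomForestClassifier(random_state=0)
classifier.fit(events[suspicious], labels)
print("Flagged as attacks:", int(classifier.predict(attacks).sum()), "of", len(attacks))
```

The key design choice in this kind of loop is that the machine does the exhaustive reading while humans only label a small, high-value slice of the data, which is exactly the division of labor described above.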
Of course, some obstacles stand in the way of widespread deployment of cybersecurity AI. First, government agencies are hit with some of the most advanced cyberattacks currently known. Foreign nations with highly trained personnel and immense amounts of money to spend are generally behind these attacks, so any AI system would need to be top of the line.
Second, the importance of national security must be stressed. Handing an AI system the reins to government cybersecurity scares a lot of people and conjures images of a real-life Skynet. These worries must be allayed through strict governance of what the AI system can do and what kinds of information it tracks.
The White House is already making moves
Clearly, the government has a lot to gain from working with AI systems, and the White House has already begun shifting certain duties to this technology. Deputy U.S. Chief Technology Officer Ed Felten has stated that the Obama administration recognizes the benefits of AI systems and is integrating these programs into current services.
Felten stated that the White House is taking major steps forward for American healthcare. Two programs, the Precision Medicine Initiative and the Cancer Moonshot, are set to increase efficiency by integrating AI techniques into current services. The first is designed to create a more personalized experience when visiting the doctor. Current medical models often force doctors to treat a patient's symptoms based on what the average person needs; using an AI program to tailor that experience could allow doctors to make more informed decisions about what each specific patient requires. The Cancer Moonshot, for its part, could benefit from AI through better collaboration and expanded data analytics applied to existing cancer knowledge.
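As a loose illustration of what "tailoring" could mean in practice, the sketch below finds the historical patients most similar to a new patient and summarizes how they responded to a treatment. The data and feature names are invented, and this is not how the Precision Medicine Initiative actually operates.

```python
# Hypothetical sketch of patient-similarity matching for personalized care.
# All data is synthetic; the features are assumptions chosen for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic historical records: age, BMI, systolic blood pressure, plus
# whether a given treatment worked (1) or not (0).
history = rng.normal(loc=[55, 27, 130], scale=[15, 4, 15], size=(2_000, 3))
responded = rng.integers(0, 2, 2_000)

scaler = StandardScaler().fit(history)
index = NearestNeighbors(n_neighbors=25).fit(scaler.transform(history))

# A new patient: 48 years old, BMI 31, systolic BP 142.
new_patient = scaler.transform([[48, 31, 142]])
_, neighbors = index.kneighbors(new_patient)

# The response rate among similar past patients informs the conversation
# between doctor and patient -- it does not replace clinical judgment.
print("Similar-patient response rate:", responded[neighbors[0]].mean())
```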
That said, Felten noted that the government still has a few hurdles to clear before AI programs can be integrated further into agency operations. One of the main problems discussed in his piece is that it's very hard to predict how AI systems will act. Perhaps the best example of this is Tay, an AI personality developed by Microsoft. The program was meant to learn how to tweet like a real person by observing and reacting to people on Twitter.
The problem was that Tay developed a personality based on whatever people tweeted at her. This caused a major controversy when a large group of people taught Tay to be racist by sending her hurtful messages at massive scale. Although this was little more than a prank, the implications are significant: AI experts need to develop ways of ensuring these systems can't be taught to "think" a certain way by outside forces if the trend is ever to pick up steam.
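One common-sense mitigation, sketched below under the assumption of a simple blocklist and rate limit, is to screen user-supplied messages before they are allowed to influence a learning system. Real safeguards are far more sophisticated, and this is not how Microsoft handled Tay; the snippet only illustrates the general idea.

```python
# Hypothetical sketch: screen user-submitted messages before they are allowed
# to influence an online-learning system. The blocklist terms and rate limit
# are illustrative assumptions, not a production filter.
from collections import defaultdict

BLOCKLIST = {"slur_a", "slur_b"}   # placeholder terms, not real data
MAX_MESSAGES_PER_USER = 20         # throttle coordinated flooding campaigns

message_counts = defaultdict(int)

def accept_for_training(user_id: str, message: str) -> bool:
    """Return True only if the message is considered safe to learn from."""
    message_counts[user_id] += 1
    if message_counts[user_id] > MAX_MESSAGES_PER_USER:
        return False               # likely part of a mass campaign
    words = set(message.lower().split())
    if words & BLOCKLIST:
        return False               # contains blocked terms
    return True

# Example: the second message is rejected because it contains a blocked term.
print(accept_for_training("user1", "hello there"))      # True
print(accept_for_training("user2", "you are slur_a"))   # False
```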