Technologies of Future Governments and Electorates: Artificial Intelligence

Foreword

We live in an exciting period of accelerating technology, where computational power and algorithmic sophistication appear to be increasing exponentially with no end in sight. Open source data analysis platforms like R and scikit-learn are making cutting-edge techniques faster and more accessible. Big Data hackathons bring together interested people of all skill levels to tease out complex relationships in large data sets. As a student of math and data science, I am increasingly enthusiastic about the potential for new technologies to transform the way we think about and interact with government. But as a practitioner of big data and algorithmic trading, I remain concerned about the possibility of misuse, whether intentional or not.

We must always be careful to pay attention to the man behind the curtain. Otherwise we could easily end up in a situation where, instead of finding the best ways to accomplish our goals, we unquestioningly allow our technologies and models to tell us which goals we ought to accomplish.

Accelerating Democracy

Accelerating Democracy: Transforming Governance Through Technology is an engaging new book by John O. McGinnis, the George C. Dix Professor in Constitutional Law at Northwestern University. In Accelerating, McGinnis discusses several major areas of emerging technologies and assesses their respective potential impacts on the ways people elect representatives, and the ways those representatives will make laws.

As an enthusiastic proponent of using technology to enhance electorates and governments, I wanted to avoid significant confirmation bias, so I approached the book critically and skeptically. This led to a lengthy critique, which I will use both as a review of the ideas in the book and as an opportunity to introduce some of the things I’ve been thinking about.

Overall, Accelerating is a great primer. I’d recommend the book to the more academic reader who is also very interested in the subject matter. You won’t find any cute Gladwellian narratives; the book is dense with citations as McGinnis tries to develop multiple arguments in a small amount of space. This demands more critical thinking than a typical work of political non-fiction. And because it’s a synthesis of many different fields of research, you should be prepared to check out a lot of the source material for yourself.

Artificial Intelligence (AI)

The book’s biggest weakness is the chapter on artificial intelligence. McGinnis proposes that AI has great potential to improve governance. I agree. However, Accelerating is biased by a single school of AI thought. Because of the seeming homogeneity of the population working in AI, McGinnis was likely unaware of the existence of this bias. As a result, I think the book unfairly dismisses other branches of AI.

The bias goes like this:

McGinnis promoted Accelerating through guest-blogging on the popular law blog The Volokh Conspiracy, which is maintained by a consortium of professors at George Mason University. Fellow GMU professors, economists, and bloggers Robin Hanson and Tyler Cowen are frequently cited throughout the book. Those two are also frequent presenters at the Singularity Institute’s annual conference on AI and transhumanism. McGinnis also heavily cites futurists Ray Kurzweil and Peter Thiel—cofounders of the Institute. Finally, McGinnis’ only source of contemporary work in AI comes from a controversial concept called Friendly AI (FAI), created by another cofounder of the Institute, Eliezer Yudkowsky. McGinnis accepts the ideas from this insular community at face value—thereby excluding not only the work of other prominent artificial intelligence researchers, but also the benefits of entirely different fields of AI.[1]

The AI chapter focuses almost exclusively on the concept of Strong AI (aka Artificial General Intelligence), which McGinnis describes as “a general purpose intelligence that approximates that of humans. Strong AI has the capacity to improve itself, leading rapidly to machines of greater-than-human intelligence.” Then, without significant explanation, McGinnis declares a one-size-fits-all solution: the correct policy for AI is “substantial government support for Friendly AI.” This is wrong.

The Problem with Faith in Friendly AI

Friendly AI is a specific type of Strong AI research in development at the Singularity Institute. The goal is to create mathematical proofs for an AI that will “protect humans and humane values.” At first glance, I can see why it might be easy for someone to be enticed by the idea of FAI. It sounds pretty good. McGinnis writes, “Friendly AI can be defined broadly as the category of AI that will not ultimately prove dangerous to humans.” Unfortunately, in practice FAI doesn’t quite work like that; in fact, it might not work at all.

The first concern is the development time—the median estimate for the arrival of Strong AI is 2040. Optimistically, McGinnis writes, “even if strong AI is not realized for decades, progress in AI can aid in the gathering and analysis of data for evaluating the consequences of social policy, including policy toward other transformative technologies.” This would make sense only if the people researching Strong AI were producing other AI tools along the way. But rather than developing actual software with end users in mind, the bulk of FAI research is in mathematical logic—concerned primarily with theoretical problems like “recursively self-improving Goedel Machines.” This makes it highly unlikely that FAI will produce any advances in data analysis.

Further, FAI might be powerless against human ill will. Researcher and Institute adviser Ben Goertzel disagrees with the dogmatic belief in the necessity of a provably friendly AI because he finds the argument’s premises unconvincing. Setting aside the inestimable probability of rapid self-modification leading to an uncontrollable intelligence (Goertzel calls this a “hard takeoff”), it is just as likely that early strong AIs will be completely subservient to humans. He’s far more concerned that “some nasty people will take the early-stage AGIs and… use them for bad ends.” This implies that the existential threat is human-driven, and therefore beyond FAI’s control.

Most chillingly, computational physicist and AI researcher Dr. Hugo de Garis believes that FAI is a “dangerous delusion.” He explains that even if a mathematical proof exists for a self-modifying source code to maintain friendliness, an AI will suffer natural mutations in its hardware circuitry due to routine environmental factors like cosmic rays, which pose a problem because “its mutated goals may conflict with human interest.” Therefore, we cannot rely on the narrow vein of FAI research alone to produce a safe construct for Strong AI when there is a serious prospect “that humanity will be drowned by vastly superior [artificial intelligences] who may not like human beings very much, once they become hugely superior to us.”[2]

I believe Strong AI is inevitable in the far future, but it’s unclear whether we should favor support for FAI given the low probability that it will fulfill its goal of guaranteed friendliness. Moreover, since it will not provide any value to governance for decades, we shouldn’t exclude the other fields of AI which do promise incremental benefits. We need to be far more comprehensive in our approach to AI than McGinnis’ Institute bias suggests. We must identify specific goals that we want to accomplish across varying timelines, and then provide research opportunities for statisticians, computer scientists, data experts, and computational physicists working on a multitude of different solutions.[3]

A Better AI-dea [4]

Machine learning (ML) is a different branch of AI research that offers more potential for educating electorates and increasing government efficiency in the immediate term, while carrying significantly less risk of wiping out all of humanity. ML is concerned with allowing computers to discover optimal solutions to clearly defined problems. It performs well when there is a finite universe of inputs and an answer space with a fitness function. Over the past couple of decades, it has given us many great things like speech recognition, spam filtering, and self-flying spacecraft—all without the intention of ending up at Strong AI.
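To make the “clearly defined problem” framing concrete, here is a minimal Python sketch using scikit-learn (one of the open source platforms mentioned in the foreword). The four messages and their labels are invented; the point is only the shape of the problem: a finite universe of inputs (words), a finite answer space (spam or ham), and a fitness function (accuracy).

```python
# Toy spam filter on invented data: bag-of-words features + Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",
    "cheap loans click here",
    "lunch at noon tomorrow?",
    "draft of the report attached",
]
labels = ["spam", "spam", "ham", "ham"]  # the finite answer space

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The fitness function: how often does the model answer correctly?
print(model.score(messages, labels))             # training accuracy
print(model.predict(["claim your free prize"]))  # most likely ['spam']
```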

Let’s go through an example of how we can proceed immediately with extremely helpful AI research long before we make any decisions about supporting Strong AI. We ask the question: Which candidates are most closely aligned with my beliefs? This question is ripe for a machine learning solution. There is a clearly defined answer space, good information about candidates’ beliefs, and arguably decent information about our own sets of beliefs.

Enter iSideWith, a political-compass website unveiled for the 2012 election that attempts to map your personal stances to actual candidates. The survey asks a series of multiple choice questions, each with an additional Likert-scale “importance-meter.” At the end, it uses an ML algorithm to calculate how well your preferences match the presidential candidates. Previous political matchmaking websites focused solely on broad party ideology and never delved into individual candidates’ stances.

(Screenshot from iSideWith. In drafts I had difficulty making this not look like an ad, so I added this line.)
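iSideWith doesn’t publish the details of its matching algorithm, so the sketch below is only a guess at the simplest plausible version: score each candidate by weighted agreement, where the weights come from the importance-meter. Every question, answer, and candidate here is hypothetical.

```python
# Hypothetical weighted-agreement matcher; not iSideWith's actual algorithm.
def match_score(my_answers, my_weights, candidate_answers):
    """Share of my importance-weighted answers the candidate agrees with, in [0, 1]."""
    total = sum(my_weights.values())
    agreed = sum(
        my_weights[q]
        for q, answer in my_answers.items()
        if candidate_answers.get(q) == answer
    )
    return agreed / total if total else 0.0

my_answers = {"q1": "yes", "q2": "no", "q3": "yes"}
my_weights = {"q1": 3, "q2": 1, "q3": 2}  # Likert-style importance ratings

candidates = {
    "Candidate A": {"q1": "yes", "q2": "yes", "q3": "yes"},
    "Candidate B": {"q1": "no", "q2": "no", "q3": "yes"},
}

for name, stances in candidates.items():
    print(name, round(match_score(my_answers, my_weights, stances), 2))
# Candidate A agrees on q1 and q3 -> (3 + 2) / 6 = 0.83
# Candidate B agrees on q2 and q3 -> (1 + 2) / 6 = 0.5
```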

Obviously a full ballot is much more than just the presidential candidates. Currently, the team at iSideWith is building a database of the contenders for the 2014 Senate and House races. Eventually, a voter will be able to find out who they match with at every level, down to their specific voting district. But determining who you side with is merely the first (and probably least interesting) application machine learning has to offer for mapping a voter’s preferences to a full ballot. If you were to vote for every candidate who has recently espoused your particular set of beliefs, how well would this actually predict an outcome you would enjoy?

So let’s ask a more specific question: Given my set of beliefs and goals, what is the optimal way to fill out my ballot? We still have clearly defined inputs and a finite set of possible answer combinations. But now we’re able to incorporate all sorts of other predictive information.

One of the great strengths of ML is that it can discover less obvious relationships between variables—like how a historical voting record affects future behavior, or how differing combinations of Democrats and Republicans are more or less likely to pass certain types of legislation. If you want to maximize the chance that your belief system is realized, the holistic analysis of a complete ballot will likely lead to candidate suggestions that differ greatly from an analysis of individual candidates’ stances, race by race. Your personalized Optimal Ballot could target any combination of your desired goals.
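Here is a toy sketch of what that Optimal Ballot search could look like. Nothing in it is a real product or dataset: the races, the candidates, and especially expected_goal_value() are hypothetical stand-ins for a learned model (say, a random forest trained on historical voting records and legislative outcomes). The made-up interaction term is only there to show how scoring a whole ballot at once can beat picking the best match race by race.

```python
# Hypothetical Optimal Ballot search over a tiny, invented slate of races.
from itertools import product

races = {
    "president": ["A", "B"],
    "senate": ["C", "D"],
    "house": ["E", "F"],
}

def expected_goal_value(ballot):
    """Placeholder for a predictive model scoring a whole ballot at once."""
    toy_scores = {"A": 0.6, "B": 0.4, "C": 0.7, "D": 0.5, "E": 0.3, "F": 0.8}
    value = sum(toy_scores[c] for c in ballot.values())
    # Invented interaction: this particular pairing is assumed to work well together.
    if ballot["president"] == "A" and ballot["senate"] == "D":
        value += 0.5
    return value

# Three races can be searched exhaustively; a full ballot would call for
# something like a genetic algorithm instead of brute force.
best = max(
    (dict(zip(races, combo)) for combo in product(*races.values())),
    key=expected_goal_value,
)
print(best)  # {'president': 'A', 'senate': 'D', 'house': 'F'}
```

In this toy run the best ballot pairs candidate A with candidate D, even though D scores worse than C in isolation; that is exactly the kind of suggestion a race-by-race matcher would never make.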

Now imagine if this were an app where you could define your goals, answer a series of questions, save your optimal ballot to your phone, and then take it with you to the voting booth. You would receive tremendous value, at essentially zero cost of learning about the different national, state, and local options for houses, senates, judges, sheriffs, bills, and propositions. This would mark a significant milestone in the effort to increase voter intelligence while decreasing the cost of being informed. In this solution, I would argue that the indifference curve of the Rational Ignorance Model is nearly solved.[5]

Policy?

If McGinnis’ goal was to persuade the reader that we should pursue a policy that primarily supports FAI, then Accelerating falls far short. Personally, I’m still unclear about how we should approach Strong AI, so I won’t attempt to make any recommendations at this time. However, as I have just shown, there are solvable problems within our immediate grasp. If McGinnis wants to see progress in the analysis and gathering of data without increasing existential risk, far more grants should be given to machine learning research to actually produce usable tools.[6]

Forward

Update: In two future posts I will write about information distribution and prediction markets. I had originally intended those sections for this article but I want to make sure you have something cohesive to read. (Evidence that I really like to think things through: this post has 1,000 revisions — about one for every three words)

Thinking forward, it looks like there is a natural roll-out order to the different elements mentioned in this essay.[7] Getting clean information into human- and machine-readable formats is the foremost goal. Shortly thereafter comes gathering good arguments along with expert evaluations and citizen polling. Then community (or non-expert) opinions can begin to be introduced at various levels. Prediction markets follow, with varying restrictions on participation and mechanisms for payoff. Once there is a sufficiently large community, experiments in analytics can be conducted to tune machine learning models.[8]

And while these technologies are very promising, we previously agreed that there is a problem. Government has few incentives to change itself—informed electorates and more efficient governments are unlikely to come about as a result of lawmakers. Unlike McGinnis, I am hesitant to prescribe any solutions that involve reliance on increasingly corrupt and corruptible governance. My goals involve designing tools and thinking about information systems agnostic of public sector support.

But our differing sentiments are most certainly a function of our respective fields. My coworkers will likely note that my favorite hammers happen to be Genetic Algorithms and Random Forests, while throughout Accelerating McGinnis considers the impacts of laws and court decisions on information quality and distribution. For a professor of constitutional law, it would certainly be convenient if we could legislate our way to a better future. Accelerating doesn’t convince me that we can. It looks like technological innovation will lead to more effective legislation, not the other way around.[9] But in the end, McGinnis leaves me optimistic about the potential for a symbiosis between technologists and legal scholars.

There’s a lot of work to do.

Footnotes    (↵ returns to text)

  1. Since Accelerating was published, the Singularity Institute has been renamed the Machine Intelligence Research Institute. And despite this essay’s oppositional tone, let it be known that I’m pretty fond of the Institute. I had the pleasure of attending the 2011 conference in NYC, and it was through meeting with members that I was inspired to finally begin fleshing out some of my ideas for improving government. However, mathematician Cathy O’Neil is highly skeptical of the whole show (link)
  2. A couple people who gave me notes told me that it sounds like I’m talking about Skynet from The Terminator movies. So allow me to apologize if you, dear reader, are put off by this idea. I most certainly have taken for granted that you have some prior knowledge of the concept of The Singularity. But in the case that you don’t, I would encourage you to do some reading (link)
  3. I don’t think McGinnis would disagree with this answer. It’s possible he was using the term Friendly AI incorrectly—not realizing it has very specific implications (despite quoting directly from Yudkowsky). But nonetheless, Accelerating does overlook every other tangible field of AI research.
  4. #punny
  5. At a later date, I will discuss the Malthusian prediction of that model and how technology will once again save the day
  6. And if we’re handing out money for that, I’ll take some
  7. “The first thing we do…”
  8. And then way down the line, Strong AI will either kill us all or launch us into perpetual prosperity.
  9. With respect to the policy suggestions in Accelerating anyway. On patent policy, Alex Tabarrok has a pretty interesting argument (link)