How to make computers think

Movies depict a future where robots can feel. But can we make them think?

In graduate school, I specialized in my favorite application of mathematics, automated reasoning — how to make computers understand the rules of logic and use them to “think” the way humans do.

The aim of this field of research is to build an efficient piece of software that can reason about the world, constructing mathematical proofs for things that are true, and disproving things that are false. Wouldn’t that be wonderful?

Isn’t that just artificial intelligence?

Automated Reasoning (AR) is a very different thing from what we know as artificial intelligence or machine learning.

Machine learning is about recognizing patterns in existing data, and then deciding whether or not a new data point belongs to an existing pattern, without understanding why, or what the pattern even means. The software is programmed to look for patterns in very specific features of the data — you can program it to look at the age, height, or educational background of a human population, or at the pixels and edges in a set of pictures.

Your robot vacuum probably uses this kind of artificial intelligence to navigate your home, with models that are trained to recognize walls, furniture, hardwood floors or carpet. 

In AR, on the other hand, the software sets out to discover truths that can be logically derived from a given set of assumptions. Similarly, it will detect a false statement, and show you why it’s false.

We as humans understand these rules of logical derivation intuitively. They include things like:

  • If A is true, and B follows from A, then B is also true.

  • If A is true, then “not A” is false.

  • If A is the same as B, and B is the same as C, then A is the same as C.
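These rules can be checked mechanically. Here is a minimal Python sketch (the function and variable names are mine, purely illustrative) that verifies each rule holds under every possible truth assignment — the kind of exhaustive check a reasoner can do for simple propositional claims:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Rule 1 (modus ponens): if A is true, and B follows from A, then B is true.
modus_ponens = all(
    implies(a and implies(a, b), b)
    for a, b in product([False, True], repeat=2)
)

# Rule 2 (non-contradiction): A and "not A" are never both true.
non_contradiction = all(not (a and (not a)) for a in [False, True])

# Rule 3 (transitivity of equality), checked over a small sample domain.
domain = [0, 1, 2]
transitive = all(
    implies(x == y and y == z, x == z)
    for x, y, z in product(domain, repeat=3)
)

print(modus_ponens, non_contradiction, transitive)  # prints: True True True
```

The first two rules are genuine tautologies — true under every assignment — while the third is checked here only over a sample domain, which is the difference between testing and proving.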

In a nutshell, AI and AR operate in completely different ways.

Artificial intelligence: Now that I’ve shown you a thousand pictures of dogs and told you they were dogs, and a thousand pictures of cats and told you they were cats, tell me if this next picture is a dog or a cat.

Automated reasoning: Given that anything that barks is a dog, and all dogs have ears, and Rex barks, prove that Rex has ears.
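The Rex argument is exactly the kind of thing a simple automated reasoner can settle by forward chaining: keep applying the rules to the known facts until nothing new appears. A toy sketch (the string-based fact format and all names are mine, purely illustrative):

```python
# Facts and single-premise rules over one variable X, written as strings.
facts = {"barks(rex)"}
rules = [
    ("barks(X)", "dog(X)"),       # anything that barks is a dog
    ("dog(X)",   "has_ears(X)"),  # all dogs have ears
]

def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            want = premise.split("(")[0]  # predicate the rule needs, e.g. "barks"
            for fact in list(derived):
                pred, _, arg = fact.partition("(")
                if pred == want:
                    new_fact = conclusion.replace("X", arg.rstrip(")"))
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

derived = forward_chain(facts, rules)
print("has_ears(rex)" in derived)  # prints: True — Rex provably has ears
```

Real provers handle far richer logics than this, but the principle is the same: every derived fact is justified by a rule applied to facts already established.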

A machine that thinks better than humans. And also worse.

We as humans have defined logical systems for ourselves, and we use them to prove and disprove many things. But even when our logical systems are sound, we still find ourselves arriving at the wrong conclusions a lot of the time. This is because of two main mistakes we tend to make.

  1. We sometimes think one thing logically follows from another when it doesn’t.

  2. We sometimes start with the wrong assumptions.

AR is more or less immune to the first type of mistake. It’s capable of producing perfectly correct mathematical proofs, where each step follows from the step before it.

However, a proof is only as good as the assumptions it’s based on. If you give AR software the wrong assumptions, you can make it “prove” just about anything you want.

It’s safe to say that AR is not as biased as artificial intelligence. We have created artificial intelligence models that systemically discriminate against women and black people based on our own biases as humans. AR, on the other hand, is based in mathematics, not in data.

But it’s important when using AR to keep the number of starting assumptions to a minimum, and give them careful consideration.

Of course, the other reason machines will likely always think differently from humans is that we have context. Yes, we can prove that something is true, but we also have some intangible understanding of what that something means — whatever that means. The statements we reason about have significance to us. We reason about a world which we also experience on an emotional level and with which we have an intricate connection. A connection which makes us better, and worse, thinkers.

Can we create the perfect proof machine?

The perfect proof machine doesn’t exist, and we can’t ever make it.

  1. For one, we now know for a fact that not all true things are provable: in any consistent logical system rich enough to express arithmetic, there are true statements that cannot be proved within it. This is Gödel’s first incompleteness theorem.

  2. The other challenge is that even the simplest of our logical systems is, as far as we know, exponentially difficult: the time our machine may need to prove a true statement can grow exponentially with the size of that statement.

  3. We know that it’s impossible to prove anything without at least one underlying, unproven assumption.

  4. In any reasonably complex logical system, there’s always a chance that a proof machine will run forever trying to prove something, never realizing it’s not making any progress.

Even with these hard and fast limits to AR, there’s a lot we can do with it. Automated provers help us verify the designs of circuits and hardware systems all the time. Amazon uses AR to verify critical parts of its software.
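Circuit verification often reduces to a logical equivalence question: does an optimized circuit compute the same function as its reference specification on every input? For a toy circuit the check is small enough to do exhaustively — a sketch, with gate functions of my own invention, purely illustrative:

```python
from itertools import product

def reference_xor(a, b):
    """Specification: exclusive or."""
    return (a or b) and not (a and b)

def optimized_xor(a, b):
    """Candidate implementation built from NAND gates only."""
    def nand(x, y):
        return not (x and y)
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Prove equivalence by checking every possible input combination.
equivalent = all(
    reference_xor(a, b) == optimized_xor(a, b)
    for a, b in product([False, True], repeat=2)
)
print(equivalent)  # prints: True — the NAND-only circuit matches the spec
```

Enumerating inputs stops scaling long before real chip sizes, which is why industrial tools lean on symbolic reasoning rather than brute force — but the question being answered is the same.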

Proving God

Instead of relying entirely on automated provers to answer the big questions, we can use proof assistants to write large and complex proofs: a human writes the high-level outline of the proof, and an automated prover works to prove each step of that outline.
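To give a flavor of what this looks like, here is the earlier Rex example written out as a machine-checked proof — a sketch in Lean 4, one of several proof assistants. The human supplies the proof term on the last line; the assistant verifies that every inference in it is valid:

```lean
-- "Anything that barks is a dog; all dogs have ears; Rex barks ⊢ Rex has ears."
example (Animal : Type) (Barks Dog HasEars : Animal → Prop)
    (barks_dog : ∀ a, Barks a → Dog a)   -- anything that barks is a dog
    (dog_ears  : ∀ a, Dog a → HasEars a) -- all dogs have ears
    (rex : Animal) (h : Barks rex) :     -- Rex barks
    HasEars rex :=
  dog_ears rex (barks_dog rex h)
```

If any step were unjustified — say, we deleted the `barks_dog` assumption — the assistant would reject the proof rather than let it through.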

Researchers from Germany and Austria automated Gödel’s proof of God’s existence using a proof assistant, definitively proving that “necessarily, there exists God.”

Well, more accurately put: if the underlying assumptions are true, then God, by Gödel’s definition of God, exists.

For my doctoral degree, I wrote a proof consisting of over 32,000 steps, taking me three years to complete. My proof assistant software proved each of those individual steps for me, and verified that the steps made sense in relation to each other.

Proof assistants still require a great deal of manual work, but they help us find definitive answers to large and complex questions.
