Metamemory, artificial intelligence and other brain teasers


Author: Melina Devoney

Your brain is jumbled in the midst of a hectic school day as you make a beeline to the library. You pass by a professor in the Quad and suddenly remember that you want to schedule a meeting with him to edit a paper.

How did that piece of information weasel its way to the front of your clouded brain when you recognized his face?

Cognitive science Professor Justin Li studied the cognitive processes that may contribute to this information selection while earning his PhD in computer science at the University of Michigan.

In his first year teaching cognitive science at Occidental College, Li is developing a research project that continues his graduate research in a realm of artificial intelligence (AI) called cognitive architectures. In this field, structures of the human mind are analyzed and constructed in a computer model, creating a framework to build AI.

According to Li, certain processes in the brain are essential to create human-level AI. Specifically, decision-making based on selection of discrete memories and knowledge is a process that Li thinks could be translated into a computer model. First though, he must decipher how the brain selects useful knowledge and memories pertinent to the situation.

Because it is difficult to verify theories about processes in the human brain from neuron activity and cognitive science alone, Li said that computer science can be used to test them.

“If you have a good enough theory, you should be able to turn it into a computer code and run it in a computer,” Li said. “That will tell you if it’s correct or not, or whether it is feasible.”

Li’s interest in computer science was sparked as he watched his mom create a website in his middle school years. In high school, Li started computer programming and continued computer science through graduate school. As an undergraduate at Northeastern University, Li was also interested in psychology in and outside of the classroom.

“I read a lot of books on—how I like to think of it— ‘How to understand people’ because I wasn’t very good at it for a long time,” Li said.

He merged the two disciplines into a career. Li is the only professor at Occidental who integrates cognitive science and computer science by programming computer models of the human mind.

Li plans to attempt modeling the cognitive structure described by the scenario of the professor in the Quad.

In order to remember that you had to schedule a meeting with the professor, you have to remember that you had something to remember in the first place. This memory pathway, whatever it is, has not yet been translated into computers. Since neither you nor a computer could predict when you would cross paths with the professor, reminding yourself to schedule the meeting is not as simple as setting an alarm on your phone. Nor would the phone continuously filter through its data to check for tasks.

“[The phone] can’t just ask, do I have something to do? Because that assumes you have something to do in the first place,” Li said.

The current design scheme in cognitive architecture relies on the computer repeatedly asking if it has anything to do, or waiting for a command whenever it encounters a stimulus.

“You can always poke at computers to suggest things,” Li said. “The hard part is figuring out what to suggest.”

According to Li, current cognitive architecture design is inefficient. Ideally, the computer would act like a human: it would see a stimulus and remember a corresponding task as an afterthought, as in, "by the way, you have something to do."
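The contrast Li describes can be sketched in a few lines of code. This is a toy illustration, not Li's architecture: the reminder store, cue names and function names are all invented for the example. The polling design must repeatedly ask whether anything needs doing, while the cue-triggered design surfaces a task only when a relevant stimulus appears.

```python
# Toy contrast between polling and cue-triggered recall.
# All names are illustrative, not drawn from any real cognitive architecture.

reminders = {}  # maps a cue (stimulus) to a pending task


def add_reminder(cue, task):
    """Store a task to be recalled when its cue is encountered."""
    reminders[cue] = task


def poll_for_tasks():
    """Polling design: the system repeatedly asks 'do I have something to do?'
    This assumes there is always a reason to ask, which is the inefficiency."""
    return list(reminders.values())


def on_stimulus(cue):
    """Cue-triggered design: a stimulus surfaces its task 'as an afterthought',
    or nothing at all if no task is attached to it."""
    return reminders.get(cue)


add_reminder("professor's face", "schedule a meeting to edit the paper")

print(on_stimulus("professor's face"))  # prints: schedule a meeting to edit the paper
print(on_stimulus("library"))           # prints: None (no task tied to this cue)
```

The hard problem Li points to is hidden in `reminders.get(cue)`: in a real mind, the "keys" are not neat labels but whatever the situation happens to present, and figuring out which stored knowledge a stimulus should unlock is exactly what has not been translated into computers.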

In graduate school, Li studied scenarios in which we need computers to recognize a stimulus and then offer pertinent information. Li’s goal was to disprove the common misconception that computers always know what to do.

For example, the typical iPhone user may assume that Siri is omniscient since she can set reminders, find local restaurants and give traffic updates. According to Li, she is not all that impressive.

“These are suggestions that are very limited,” Li said. “They are bound to locations, bound to things you had told it before.”

Siri can only perform tasks based on information in her user's iCalendar and email. Siri performs well when told exactly what to do, but when asked to problem solve, she draws a blank. This is because a computer cannot, on its own, select the knowledge pertinent to a decision, especially if that knowledge includes human goals, reasoning or stray facts in a database.

Li used an analogy to explain the brain’s method of stringing together goals, reasoning and applicable information: One morning, you pour the last drop of milk into your cereal bowl so you decide to buy more after class. Later, you walk by an unfamiliar Irish grocery store. You wonder whether you should go in because you are unsure if it carries milk. Suddenly, a fact pops into your head: Ireland leads the world in milk consumption per capita. With that fact in mind, you assume the store sells milk and you go in.

“Why did the fact that the Irish drink the most milk pop into your head?” Li said. “That’s a fact that leads to actions, but by itself it’s just a fact.”

Given all the facts in a human brain, how does it figure out which facts are relevant at the time? Li strives to decipher this phenomenon in order to program a computer that can make human-like decisions.

Is it even possible?

“I don’t know,” Li laughed.

According to Li, cognitive scientists already know of general strategies people use to remember facts and knowledge.

One of these tools is called metamemory: the brain's knowledge about its own memory system.

To exemplify this, Li asked me to name the capitals of certain countries: “United States?” he asked. That was easy: Washington D.C. “France?” Paris. “Mexico?” I paused for a second before I remembered Mexico City. “Portugal?” Another long pause, then I gave up. “Sri Lanka?” I gave up instantly.

Li pointed out that I knew the U.S. capital instantly and I had to think a little harder to name Mexico City, but I instantly knew that Sri Lanka’s capital was a lost cause.

I persevered until I pulled Mexico City out of my brain, but I immediately dismissed Sri Lanka because I subconsciously knew that I had never known its capital. This is metamemory.

“You know something about whether you know something, without actually knowing the answer,” Li said.
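The capitals quiz can be caricatured in code. This is a deliberately crude sketch, with invented names throughout: a "familiarity" signal stands in for the feeling of knowing, letting the system judge whether retrieval is worth attempting before it ever looks for the answer.

```python
# Toy model of metamemory: judging whether you know something
# without actually retrieving the answer. Illustrative names only.

capitals = {
    "United States": "Washington D.C.",  # instant recall
    "France": "Paris",                   # instant recall
    "Mexico": "Mexico City",             # recalled after effort
}
# Countries the system has some familiarity with, even if no answer is stored.
familiar = set(capitals) | {"Portugal"}


def feeling_of_knowing(country):
    """Metamemory judgment: should retrieval be attempted at all?"""
    if country in capitals:
        return "know it"        # retrieval will succeed
    if country in familiar:
        return "might know it"  # keep trying, though it may fail (Portugal)
    return "never knew it"      # give up instantly (Sri Lanka)


print(feeling_of_knowing("Mexico"))     # prints: know it
print(feeling_of_knowing("Portugal"))   # prints: might know it
print(feeling_of_knowing("Sri Lanka"))  # prints: never knew it
```

The caricature, of course, is that this sketch checks the answer store directly, whereas the point of metamemory is that the brain produces the "never knew it" verdict without any such lookup; how it does so is part of what Li wants to capture in a cognitive architecture.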

Humans have many strategies like this for conscious and unconscious decision-making, and Li wants to get a computer to carry them out on its own.

Computers would then be capable of accomplishing tasks such as automatically redefining an unsuccessful Google search query or even deciphering and using a totally unfamiliar database.

In addition to setting these lofty goals, Li is working on a near-future project of building a computer science department at Occidental. The college has a computer science minor, but no major or comprehensive department. He is reworking the computer science course offerings with the hope of developing a major, and he posts progress updates on his blog, "How to Start a CS Department."

According to Li, computer science is a vital field because of its cross-discipline implications.

“It’s hard to justify not knowing about computers given how often computers are useful,” Li said. “Not just in academia, but also issues of privacy on social media and issues with intellectual property.”

With all this talk of advancing artificial intelligence, I am convinced that humans have to actively work to keep up their computer science skills. Otherwise, the possibility of artificial intelligence rebellion depicted in sci-fi movies such as "I, Robot" seems all too real.
