Consciousness: From Memory Systems to Minds

Introduction and Problem

Some questions randomly come into your head and throw you into a thinking spiral. You think about them for days, months, sometimes years (yes, I do at least lol), and still never really get a clean answer. At some point, you even start wondering whether these questions have answers at all. I do not think we can fully answer such questions, but I do think we can try to get somewhat close to them.

I have a lot of these questions in my head, and I wanted to share one of them here. Not to offer an answer, but to start a conversation that might help us move closer to the truth, or at least make the confusion more precise.

These questions rarely appear overnight. Usually, some idea creates a small crack, and over time, that crack grows. More thoughts, readings, courses, arguments, and personal experiences keep feeding into it. Eventually, the question becomes so broad and vague that it is difficult to trace its origins, let alone frame it properly.

As far as I can remember, this particular line of thinking began with a question that is very simple to ask yet very difficult to answer: what makes “humans” “human”? That question led me in many directions, but we might explore the broader “human question” some other day. In this blog, I want to isolate one question that came out of all that mess.

The content of this blog might jump around here and there, as I couldn’t find a proper way to structure these thoughts, so let’s treat it as a conversation.

Cognitive psychology and the engineer’s discomfort

Last semester, I took a course called Cognitive Psychology with Dr. Asir Ajmal. In this course, we studied human mental processes using information-processing and computational models. We looked at how humans perceive visual, auditory, and other sensory inputs, how attention and memory interact with that input, and how the brain processes it to produce behavior.

We read a lot of theories on different cognitive processes, and at first I told myself that these are, of course, just theories and not the actual truth. But if you are a software engineer like me, you tend to look at theories differently. Working on different kinds of software over the years, including image processing systems and, more recently, AI systems, I noticed that if a system consistently produces correct outputs for given inputs, we treat it as reliable. We do not need to fully understand every internal detail. We validate it by behavior. This is how black boxes work. You trust them because they work.

Cognitive psychology bothered me because it operates in a very similar way. Unlike traditional psychology, where testing is often indirect or interpretive, and a lot of extraneous variables raise all kinds of questions about the results, cognitive psychology gives us models that can actually be implemented. You can write programs that simulate attention, memory, perception, even decision-making, and then compare their outputs with human behavior.

At that point, it becomes hard not to ask: if cognition can be simulated and tested this way, what exactly is left that makes the human mind special?

A rough picture of how human cognition works

Very roughly, human cognition works like a continuous real-time processing loop. We have input systems: eyes, ears, touch, smell, and so on. These systems are constantly receiving information, even when we are not paying attention. Each input briefly exists in a sensory memory store, such as iconic memory for vision or echoic memory for sound.

Most of this information disappears almost immediately. A small portion is selected by attention mechanisms and moved into working memory. Working memory is where active processing happens. From there, information may remain briefly as short-term memory or, if rehearsed or meaningfully processed, be stored in long-term memory.

This process never really stops. Even when we are sitting idle, the brain is not inactive. Memories are retrieved from long-term storage, brought into working memory, processed again, and combined with emotions and bodily responses. Happy memories might trigger a smile. Sad ones might trigger tears or a heavy feeling. Neurochemicals are released that bias how the system behaves next.

This entire loop of perception, memory, retrieval, emotion, and action is what we casually call thinking. What is important here is that none of this is described in mystical terms. We have theories about task prioritization, interrupt handling, memory retrieval, and spontaneous thought. They are imperfect, but they are structured and testable. If you are interested, you should read either these lecture notes or look into individual experiments and models like Kahneman’s attention model and the Atkinson & Shiffrin model.
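
To make the information-processing framing a bit more concrete, here is a deliberately toy sketch in Python of the multi-store flow described above. The class, the buffer lifetime, the capacity limit, and the relevance test are my own illustrative choices, not values taken from Kahneman or Atkinson & Shiffrin.

```python
import time
from collections import deque


class ToyMultiStoreMemory:
    """A toy illustration of the sensory -> working -> long-term flow.
    Every number here is made up for illustration, not taken from the literature."""

    def __init__(self, sensory_lifetime=0.5, working_capacity=7):
        self.sensory_lifetime = sensory_lifetime               # seconds before a sensory trace fades
        self.sensory_buffer = deque()                          # (timestamp, item) pairs
        self.working_memory = deque(maxlen=working_capacity)   # small, capacity-limited store
        self.long_term_memory = {}                             # item -> rehearsal count

    def sense(self, item):
        # Everything arriving at the senses lands in the sensory buffer first.
        self.sensory_buffer.append((time.time(), item))

    def attend(self, is_relevant):
        # Attention selects a few items before they fade; the rest are simply lost.
        now = time.time()
        while self.sensory_buffer:
            stamp, item = self.sensory_buffer.popleft()
            if now - stamp <= self.sensory_lifetime and is_relevant(item):
                self.working_memory.append(item)

    def rehearse(self, item):
        # Repeated rehearsal of what is in working memory consolidates it into long-term memory.
        if item in self.working_memory:
            self.long_term_memory[item] = self.long_term_memory.get(item, 0) + 1
```

Passing something like `is_relevant=lambda item: "loud" in item` plays the role of attention here; everything that is not selected simply fades, which is the core of the multi-store picture.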

A thought experiment

Before trying to answer what exactly is missing, let’s do a thought experiment. Imagine we try to build a machine that mimics human cognition, not in some vague science-fiction sense, where robots will take over the world, but in a very boring, engineering way.

We start with inputs. Cameras, microphones, sensors. Nothing special here. They constantly receive data, even when the system is not explicitly “paying attention”. This data briefly sits in short-lived buffers, because storing everything permanently would be inefficient and we want to mimic something like iconic and echoic memory.

Next, we add a working space. A limited area where some of this incoming data is selected and actively processed. We need a way to decide what gets processed, so we add an attention mechanism. Loud noises and other salient inputs produce interrupts, and frequently accessed patterns get higher priority. I know we didn’t cover everything, but this is enough to get the idea.

Now we add short-term storage for active tasks and long-term storage for information we want to keep. To make this useful, memories need weights. Recent or emotionally strong memories should be easier to retrieve. Old or irrelevant ones should decay unless reinforced. Encoding, storage, retrieval, and decoding all become part of the pipeline. To make the system adaptive, we let it learn. It updates weights based on outcomes. It stores past states and uses them to guide future behavior. Over time, it develops preferences, biases, and patterns of response.

At this point, even when no new input is coming in, the system does not stop. It retrieves stored information, reprocesses it, compares it with current goals, and prepares possible actions. From the inside, it looks like a continuous internal loop. We can even go further by modeling something like emotion, not as feelings, but as internal states that influence processing. Certain memories bias decisions. Certain states slow things down or speed them up. The system behaves differently depending on what it is “processing".
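
If I sketched that machine in code, it might look something like the loop below. This is only a cartoon, not a serious cognitive architecture: the salience threshold, the decay rate, and the “mood” variable are assumptions I am inventing to show how interrupts, weighted memories, and an internal state could all live inside one processing loop.

```python
import random


class ToyCognitiveAgent:
    """A cartoon of the machine described above: interrupt-driven attention,
    weighted memories that decay, and an internal state that biases processing.
    Events are plain strings; all thresholds and rates are invented for illustration."""

    def __init__(self):
        self.memories = {}      # event -> weight (higher weight = easier to retrieve)
        self.mood = 0.0         # crude stand-in for an emotion-like internal state
        self.decay_rate = 0.99  # memories fade a little on every idle cycle

    def perceive(self, event, salience):
        # Highly salient events (a loud noise, a strong signal) act like interrupts:
        # they jump straight into processing instead of waiting their turn.
        if salience > 0.8:
            self.process(event, salience)

    def process(self, event, salience):
        # Processing reinforces the memory and nudges the internal state up or down.
        self.memories[event] = self.memories.get(event, 0.0) + salience
        self.mood += 0.1 if "good" in event else -0.1

    def idle_step(self):
        # Even with no new input, the loop keeps running: everything decays a little,
        # then one memory is recalled (with a mood-dependent random nudge) and re-processed.
        for event in self.memories:
            self.memories[event] *= self.decay_rate
        if self.memories:
            recalled = max(self.memories,
                           key=lambda e: self.memories[e] + self.mood * random.random())
            self.process(recalled, salience=0.1)
```

Nothing here is mysterious, which is exactly the discomfort: call `idle_step()` in a loop with no input and the system still “thinks”, recalling and reinforcing whatever its weights and internal state push to the surface.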

Now this is where my engineering background started to clash with my intuitions. When we build software systems, we do very similar things. We encode information, store it, retrieve it, and update it based on new data. For example, when we build memory systems for AI (the feature in ChatGPT or other LLMs that remembers who you are and what you have talked about before), we embed information, store it in databases, assign weights or relevance scores, and retrieve it when needed. Functionally, the process is extremely similar to human cognition.
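
As a rough sketch of what I mean on the software side, a minimal version of such a memory system might look like the snippet below. The `embed()` function is only a stand-in for whatever embedding model you would actually call, and the recency-discounted score is just one plausible way to assign relevance; real systems differ.

```python
import math
import time


def embed(text):
    # Placeholder: in a real system this would call an embedding model.
    # Here we just hash characters into a small fixed-size vector and normalise it.
    vec = [0.0] * 16
    for i, ch in enumerate(text):
        vec[i % 16] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


class ConversationMemory:
    """A minimal sketch of an assistant's memory: embed, store, score, retrieve."""

    def __init__(self):
        self.items = []  # each item is (vector, text, timestamp)

    def store(self, text):
        self.items.append((embed(text), text, time.time()))

    def retrieve(self, query, top_k=3, recency_half_life=3600.0):
        # Relevance = semantic similarity (dot product of normalised vectors),
        # discounted by how old the memory is. One plausible scheme among many.
        q = embed(query)
        now = time.time()
        scored = []
        for vec, text, stamp in self.items:
            similarity = sum(a * b for a, b in zip(q, vec))
            recency = 0.5 ** ((now - stamp) / recency_half_life)
            scored.append((similarity * recency, text))
        return [text for _, text in sorted(scored, reverse=True)[:top_k]]
```

Encode, store, score, retrieve: functionally, that is all “memory” means in these systems, and the parallel with the loop described earlier is exactly what makes the question uncomfortable.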

In modern AI systems, especially large language models, information is represented internally, processed step by step, and used to generate outputs. There is learning, memory, prioritization, and even something resembling internal reasoning before a final response is produced.

From a purely functional perspective, the overlap with human cognition is obvious. So the uncomfortable question appears naturally: if we can build machines that do what cognition does, then what exactly is missing?

Cognition vs consciousness

At this point, we need to separate two things that often get mixed together.

Cognition is information processing. It includes thinking, learning, memory, attention, and problem-solving, all the things we talked about above. Consciousness, on the other hand, is subjective experience. It is the fact that you experience anything at all.

The hard question is not whether machines can have cognition. They already do. The real question is whether such a machine could be conscious. As we said, consciousness is the inner experience that accompanies our thoughts, but one can argue that when machines reason (as LLMs do), they also go through internal steps before producing an output. There is an internal process happening there, too, one that we call reasoning.

Functionalists say that if the functions are the same, consciousness should follow. Others argue that machines do not actually “feel” anything; they only imitate thinking. But this raises another question: what does feeling actually mean? We can argue that in humans, feelings are strongly tied to memory activation and neurochemical changes. If a system reproduces those effects functionally, what exactly is missing?

This is why consciousness is called the hard problem. We do not know how physical processes give rise to subjective experience, or even how to test whether something truly has it.

Some answers we have to the consciousness problem

Once you take cognition seriously as an information-processing system, the question of consciousness becomes unavoidable. Over time, different answers have been proposed. None of them is trivial, and none of them fully solves the problem. But each one highlights a different aspect of what we are struggling to explain.

Physicalism

The most straightforward answer is physicalism. According to this view, consciousness is not a separate thing at all. It is simply what the brain does. Physical activity happens in the brain, such as the firing of neurons, and subjective experience somehow emerges from it. From this perspective, there is nothing mysterious in principle, and if we take this view to be true, then a sufficiently advanced machine could be conscious.

If only the physical arrangement of the system matters, and a machine processes information the same way a brain does, then there is no clear reason to deny it consciousness. At the same time, physicalism doesn’t explain how we get that subjective “experience” in the first place, or why and how everyone has one.

Dualism

The other answer to this question is dualism, which takes the opposite approach. It argues that consciousness cannot be reduced to physical processes. There is something about subjective experience that fundamentally escapes material explanation, which answers the question physicalism leaves open. Dualism says that the mind and body are two separate entities.

I was introduced to this concept by Ibn Sina’s floating man thought experiment.

Ibn Sina said that a person suspended in space, cut off from all sensory input, would still be aware of their own existence. From this, he concluded that self-awareness does not depend on bodily perception and therefore cannot be identical to the body. In this view, the mind or soul is a distinct entity; the brain may be necessary for interacting with the physical world, but consciousness itself exists independently of it.

Dualism explains subjective experience by separating it from matter, but raises new questions: if consciousness is non-physical, how does it interact with the physical brain? How does a non-material mind or soul influence neurons and muscles? At what point does this soul enter the body? And do animals possess it? If so, to what extent?

Religion

We have some religious explanations that try to answer this problem; they often overlap with dualism but add a theological structure. For example, in Islamic thought, consciousness is sometimes described through concepts like Ruh (spirit), Aql (intellect), and Nafs (soul, self, or sometimes ego). These are not always cleanly defined; different scholars emphasize different aspects and use different frameworks and tools to explain them. Scholars like Ibn Sina approached the soul philosophically, while others like Al-Ghazali criticized excessive reliance on reason and emphasized spiritual experience and divine knowledge. According to religious frameworks, human awareness is tied to moral responsibility and a purpose, and is a nonphysical entity granted by the divine.

The religious explanation provides an answer to “why” but not “what”, and it raises difficult questions of its own. If consciousness depends on a soul, what about animals? Do they possess souls? If yes (as many Islamic scholars interpret), in what sense? What would be the meaning or goal of their existence, and how is that goal justified when a higher intellectual being (the human) exists? If not, why do they experience pain and emotion?

Panpsychism

The last answer I found to this problem is panpsychism, and I was really fascinated when I first read about it. It takes a very different approach: instead of asking how consciousness emerges from non-conscious matter, it claims that consciousness was never absent to begin with. According to this view, consciousness is a fundamental feature of reality, like mass or charge, and even the smallest particles possess extremely basic forms of experience. Human consciousness is not created from nothing but formed by organizing these basic elements into complex structures. This idea is attractive because it avoids the emergence problem. Consciousness does not suddenly appear at a certain level of complexity. It is always there, just in simpler forms.

However, panpsychism introduces a serious difficulty, usually called the combination problem.

If every basic piece of matter has some extremely simple form of experience, then human consciousness must be made out of countless tiny experiences. The problem is explaining how these tiny experiences come together. How do many separate, simple points of experience combine into one unified point of view? When you are conscious, you do not experience millions of separate sensations happening independently. You experience a single, unified perspective. Thoughts, emotions, memories, and sensations all appear together as part of one continuous experience. Panpsychism has trouble explaining how this unity emerges from many small conscious parts.

This also creates practical confusion. If everything has some level of consciousness, why does a human clearly feel conscious, while a rock does not? At what point does experience become meaningful or unified enough to count as a mind? And once we ask that, the same question extends to animals and even machines. If they are complex enough, do they also form unified conscious subjects?

Conclusion

I would love to talk about more philosophies that try to answer this problem, especially the religious ones, but everything I have explored so far has only created new questions, and as I said at the beginning, I am not here to answer the question.

Maybe consciousness is not a problem we solve once and move on from, as we usually do in the engineering world. Maybe it is something we keep circling back to after refining our questions, and for now, that is enough.

1 Comment

  • It’s great that my new year begins with Umar bhai dropping banger blogs on interesting problems 😉

    As you said, consciousness is a hard problem, and we have had debates around its very definition for centuries now. Who knows whether Sina’s floating man experiment holds the right conclusion, or the panpsychic view.

    My current read suggests ‘Any consciousness at all is a sickness’, so I think I’ll stick to that, for the time being 😀

    However, for me personally, I’m more interested in how well we could teach human cognition to computers. Under all the buzz and hype around Artificial Intelligence these days, this would be the real A.I., I feel.

    Self-driving technology, which many companies have been working on for decades now, would be a great application of simulated human cognition.

    Companies like Tesla, Comma AI, etc. believe that cameras alone, plus a lot of recorded human driving footage, could very well be the answer to achieving full self-driving, which may still be a decade away. But even if it takes longer than a decade, this sort of work is what excites me for the future.

    This is part of the reason why I’ve started to work on my maths, because those depth-perception and distance-calculation algorithms that these self-driving companies are working on are pure maths, and I want to understand them at the very least.

    Someday, I could see myself taking part in developing this simulation of human cognition, even if it’s limited to specific applications like driving cars at the moment. Maybe building these specific solutions is how we get to Artificial General Intelligence.

    I’d love it if you could write about the technology or the approach behind self-driving cars. Your insight into the question “Can machines be taught to drive just like humans?” would be really cool for me. Your background as a Software Engineer would really complement your thoughts on the topic.

    If you want to explore the self-driving space, here’s a podcast of someone I really admire, who is trying to solve the self-driving problem. I know you’ll enjoy it at the very least:
    https://www.youtube.com/watch?v=iwcYp-XT7UI

    As always, I enjoyed your blog, even when I had to read it three times and I’m left with more questions than I had at the beginning. Classic Umar jitsu 🙂
