Josh Haas's Web Log

A Theory of Agency


One of the traditional contrasts drawn between Eastern and Western thought is that Eastern thought focuses on acceptance of reality as it is, whereas Western thought focuses on progress to make reality better.

This contrast has always bothered me, because to me it feels like I’d want both! I’m a believer in progress and changing the world, but I also think it’s very important to live in the now and accept reality as it is.

I’ve always been interested in a unified theory of psychological growth that explains the role of both expanding horizons and goals and of deeper acceptance of reality.

I’ve been thinking about this problem again lately, and I have a theory now that I’d like to share.

The theory maps out the lifecycle of psychological growth of an agent. By an agent, I mean an intelligent, goal-directed being like a human (or an AI if we ever figure out how to program one).

Being an agent, to me, means having a mental model of the world, and having the tendency to act in ways that bring the world into alignment with certain features of the model (i.e., goals).

Psychological growth means developing a richer, more effective model of the world; it also correlates with basic human values like becoming more loving and happier. So, this can be regarded as an exercise in amateur developmental psychology — how does a baby grow up and progress into happy, functional adulthood?

Unlike other models such as Piaget’s stages of cognitive development, I’m less interested in what understanding is acquired when, and more interested in what the basic process of acquiring understanding looks like. In my examples I use a baby, since babies’ lives are simpler, but I see this process as continuing all the way through a person’s old age, as long as that person continues to grow.

Below, I’ll present the model — the fun part — and then below that, some more notes on what I think an agent is (a little less interesting, but useful for getting a richer understanding of the theory).

This is total speculation, but it’s fun speculation, and it correlates pretty well with my personal observations of what growth feels like. If I ever try to program an AI, I’ll be keeping this theory in mind.

 

The model

 

1. Ego: A conscious being (“the agent”) achieves agency by identifying with a normative model of a segment of reality (“the system”).

The model is of the form “This is how the world works…” and it contains a built-in representation of the agent’s goals by labeling different aspects of reality as “good” or “bad”.

Example: A baby knows that when it feels uncomfortable (bad), it can scream, and that will cause mommy to care for it (good).

 

2. Pain: Because the agent’s model is an incomplete description of reality, the agent is unable to reliably achieve its goals. These failures are caused by elements of reality outside the agent’s model (“external forces”).

Example: The baby isn’t taking into account that screaming is tiring and aversive for its mommy, which means that while mommy does care for it, she is less affectionate (bad) when the baby screams a lot.

 

3. Suffering: The agent resists the gap between its model and reality.

This is where the traditional stages of grief live.

Denial: no, the world DOES work like I think it does
Anger: dammit, the world SHOULD work like I think it does
Bargaining: okay, maybe I can MAKE the world work like I think it does
Depression: oh no, the world DOESN’T work like I think it does

Example: The less affectionate mommy is, the more the baby screams, causing a painful feedback loop.

 

4. Acceptance: The agent relaxes its identification with the model, alleviating the suffering.

In Eastern traditions, this is one of the major functions of meditation techniques: the ability to perceive pain without experiencing suffering.

Example: The baby stops trying to get affection by screaming.

 

5. Awareness: Now that it no longer strictly identifies with its model, the agent is able to accurately perceive the external forces it was previously blind to (since they existed outside its description of reality).

This is another major function of meditation techniques: the ability to perceive data points that are outside one’s normal cognitive model.

Example: The baby notices that sometimes mommy smiles more and sometimes mommy smiles less. It notices that when it giggles, she smiles more.

 

6. Understanding: The agent’s new awareness gives rise to many fresh data points, which it synthesizes into a new, more accurate model of reality that takes into account the external forces.

Example: The baby develops a simple theory that mommy can be happy (good) or unhappy (bad), and that when mommy is happy she is more affectionate (good).

 

7. Power: Based on this more accurate model, the ability of the agent to influence reality increases, enabling the agent to achieve its initial goals, thereby alleviating its pain.

Example: The baby increases its skill in manipulating its mommy… it now mixes screaming with smiling and giggling, which gives it more fine-grained control over its mommy’s care for it.

 

8. Creativity: The agent’s increased power creates an opportunity to identify additional goals, now that the agent can influence things that were previously outside its perceived reality.

The new goals are generally a synthesis of the agent’s pre-existing normative outlook with the agent’s enhanced understanding of reality. Often, this new understanding will involve a better comprehension of other agents’ goals, and the synthesis will involve a fusion of its goals with those of other agents.

Example: The baby develops a more playful relationship with its mommy, as it tries to make her happy in addition to making itself happy.

 

9. Love: The agent expands its sphere of identification to encompass its new goals, within its new model of reality.

Example: The baby now sees wanting to make mommy happy as part of its identification… it loves its mommy!

 

10. Ego: The expanded sphere of identification becomes the agent’s new normal.

This is back to the beginning of the cycle, and the process begins over again as the expanded sphere of identification introduces new sources of pain.

Example: Mommy had a bad day at work, which isn’t part of the baby’s new model of making mommy happy, and now causes the baby pain (and creates a new opportunity for growth).

 

I’ve broken out these steps for clarity, but in practice they flow into each other. Each iteration through the cycle increases the accuracy and scope of the agent’s model of reality. In theory, this process of growth can proceed indefinitely, as nearby agents intermesh their models with each other and collectively encompass and determine an ever-increasing portion of the universe. Foreshadowing: I think this has implications for political philosophy! To be discussed another day.
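For readers who like to see a process like this in code, here is a very loose sketch of the cycle as a loop. Every name in it is a placeholder I made up for the corresponding stage; it’s not a real learning algorithm, just the shape of the process.

# A caricature of the growth cycle as a loop. Each function passed in is a stand-in
# for the corresponding stage above; nothing here is meant as an actual algorithm.
def growth_cycle(model, goals, observe_gap, accept, perceive, rebuild, expand):
    while True:
        pain = observe_gap(model, goals)   # 1-2: the ego acts, reality pushes back
        model = accept(model, pain)        # 3-4: suffering gives way to acceptance
        data = perceive(model)             # 5: awareness of the external forces
        model = rebuild(model, data)       # 6-7: understanding, and renewed power
        goals = expand(goals, model)       # 8-9: creativity and love widen the goals
        # 10: the widened model and goals become the new ego, and the cycle repeats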

Anyway, for those interested, here’s some background on what I think agency is in more detail. (Bonus: my solution to the free will vs determinism problem!)

 

What is an agent?

 

Definition: An agent is a normative model of reality.

What does this mean?

Let’s define “model” as a chunk of reality that represents, in a simplified form, another chunk of reality. For instance, an Excel spreadsheet with financial data can be a model of the operations of a business.

What does “represent” mean, though? Can I pick out a random rock formation in Utah and say that it represents the U.S. economy?

That seems a little silly, because there is no causal connection between the state of the U.S. economy and the shape of the rock formation. So at a minimum, we can say that representation entails a one-way causal process that makes the model somehow change in response to the thing that it models. In our spreadsheet example, there’s a causal process involving a human that types in numbers, and those numbers vary based on how the business does, which makes our claim that the spreadsheet represents the business at least somewhat plausible.

This notion of representation is still pretty subjective, though. With our spreadsheet, there’s a lot of important information about the business that doesn’t get recorded (for example, what the personalities of the employees are like). And the process that connects it to the business is fallible; sometimes the human updating it makes a mistake (or deliberately fudges the numbers!). So the degree to which the spreadsheet is a representation of the business is still very debatable.

Now, let’s say that the person using the spreadsheet adds an arbitrary number — let’s say 20,000 — next to the “total profits” line, and then makes business decisions that try to make “total profits” equal 20,000. That 20,000 is a goal — it’s a representation of how the modeled system should be.

We now have a feedback loop between the spreadsheet and the business. The person updates the spreadsheet based on how the business performs, and then runs the business based on what the spreadsheet indicates.

At this point, we have an empirical criterion for saying whether or not the spreadsheet is actually a model of the business. If the feedback loop is able to create progress towards the goal, then it is clear that the spreadsheet does in fact represent the business. It doesn’t matter that the spreadsheet is a vast oversimplification, or even that it contains some inaccurate data. If it is sufficiently accurate to give rise to decisions that bring reality into alignment with the goal, then it is successfully functioning as a model.

We can now explain the definition of an agent: an agent is any system that involves a model, a goal, and a feedback loop to bring the model into alignment with the goal. This is what I mean by a “normative” model: a model that doesn’t just represent reality, but in fact “tries” to make reality align with itself.
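To make that definition concrete, here’s a minimal sketch in Python of the spreadsheet example as an agent. The business methods it calls (measure_profits, cut_costs) are invented purely for illustration; the point is just the model-goal-feedback structure.

# A minimal sketch of an agent as a normative model: a simplified model of a system,
# a goal expressed in the model's own terms, and a feedback loop that pushes the real
# system toward the goal. The business interface here is hypothetical.
class SpreadsheetAgent:
    def __init__(self, goal_profit=20000):
        self.model = {"total_profits": 0}   # vast oversimplification of the business
        self.goal = goal_profit             # the number written next to "total profits"

    def update(self, business):
        # one-way causal process: the model changes in response to what it models
        self.model["total_profits"] = business.measure_profits()

    def decide(self, business):
        # feedback loop: the gap between goal and model drives the decisions
        if self.model["total_profits"] < self.goal:
            business.cut_costs()
        # empirical test: if this loop reliably closes the gap, the spreadsheet is
        # successfully functioning as a model of the business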

A homing missile is a good example of a simple agent. It has a sensor that detects where its target is; an internal computer that models where it is in relation to its target; a goal, which is to hit the target; and a steering system that serves as a feedback loop, adjusting the missile’s trajectory so that it homes in. Even though this is a very simple system, it still displays intelligent behavior: it visibly observes and reacts to its environment.
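Here’s a toy version of that feedback loop in code, assuming a flat 2D world and a simple proportional steering rule. Real guidance systems are far more sophisticated, but the structure is the same: sense, compare to the goal, correct.

import math

# A toy sketch of the missile's feedback loop: sense the target, model the bearing to
# it, compare with the current heading (the gap from the goal), and steer to close it.
def steer(missile_pos, heading, target_pos, gain=0.5):
    # model: which direction is the target from where I am?
    bearing = math.atan2(target_pos[1] - missile_pos[1],
                         target_pos[0] - missile_pos[0])
    # the gap between the goal (pointing at the target) and the current heading,
    # wrapped into the range [-pi, pi]
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    # feedback: turn part of the way toward the target on every step
    return heading + gain * error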

 

Self-aware agents

 

Homing missiles are intelligent, but they are also predictable. Once you understand the inner workings of a missile’s feedback system, you can predict with perfect accuracy how it will behave in a given environment.

Humans, on the other hand, are often predictable, but have the capacity to defy predictions by creatively inventing new behaviors. People do totally original things, such as figuring out how to land a rocket on the moon. This is why, when we describe a homing missile as “intelligent”, it has a different, lesser meaning than when we describe a human as “intelligent.”

How is this possible? Historically, the operating theory was that humans, unlike homing missiles, aren’t subject to the laws of cause and effect: they have a “soul” that sits outside of the laws of physics, and can generate truly novel, non-determined behavior. However, there seems to be lots of evidence that human brains obey the laws of physics like everything else in the world, and that disruptions to the physical brain can cause corresponding disruptions to one’s thought process, so this explanation doesn’t seem plausible to me.

A counter-theory is that humans exhibit unpredictable behavior because their normative model of reality contains a model of themselves as an agent. In other words, humans don’t just predict the external world and try to bring it into alignment with their personal goals (food, shelter, etc.); they also have a model within their own brains of themselves as agents pursuing those goals.

Someone trying to lose weight, for instance, can predict that they will eat a sugary snack if they see one, and therefore decide to proactively hide all the snacks in their house. This action couldn’t be taken by a non-self-aware agent, which would try to solve the weight-loss problem merely by resolving not to eat the snack, without taking into account that this resolution would be undermined by changes in its own thought process down the road.
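In code, the difference might look something like this. Everything here (the situation string, the predictor function) is a made-up stand-in; the point is that the self-aware planner consults a model of its own future behavior before deciding.

# A plain agent plans only over the outside world; a self-aware agent also asks a
# model of itself what it will actually do later. Both functions are illustrative.
def plain_plan():
    # ignores the fact that its own resolve will weaken later
    return ["resolve not to eat the snack"]

def self_aware_plan(predict_my_behavior):
    # consult the internal self-model: "if the snack is there and my willpower is
    # low, what will I actually do?"
    if predict_my_behavior("snack in the cupboard, willpower low") == "eat it":
        return ["hide the snacks now"]   # act on the prediction about oneself
    return ["resolve not to eat the snack"]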

In fact, one theory for the origin of human intelligence is that it came from developing theories of mind for the other humans in one’s tribe, in order to better predict and manage their behavior. Having developed the capacity to observe another person and anticipate how they would react to a given situation, humans could turn that faculty inward to predict themselves, allowing the possibility of seeing oneself objectively and adapting one’s own behavior.

This is sufficient to explain how humans can behave truly unpredictably. Alan Turing proved that there is no general procedure that can always determine, within a finite amount of time, what an arbitrary computer program will do on a given input. His proof was that if you could write down an algorithm for doing that, you could write a program that runs the algorithm on itself and then outputs the opposite of whatever it predicts — which is a contradiction, which means that such an algorithm doesn’t actually exist. (In his original example, the question was whether a computer program would halt or whether it would continue on forever.)
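Here’s the diagonal move in Turing’s argument, translated into a sketch about predicting agents. The predict function is hypothetical (that’s the whole point): if it existed and were always right, this agent would contradict it.

# Suppose predict(agent, situation) were a perfect predictor of behavior, returning
# either "act" or "refrain". This agent runs the predictor on itself and then does
# the opposite of whatever it is predicted to do.
def contrarian(predict, situation):
    forecast = predict(contrarian, situation)
    return "refrain" if forecast == "act" else "act"

# Whatever predict claims contrarian will do, contrarian does the other thing, so no
# always-correct predict can exist, just as in the halting problem.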

By analogy, a human agent that has a model of his or her own behavior can predict what he or she is going to do, and then choose to do something else. Therefore, even though human actions arise from causal pathways in the brain, an external observer, even one with the most sophisticated neuro-imaging tools, cannot always predict how someone will behave. That is the answer to the puzzle of how a seemingly “determined” person can exhibit creative behavior.

(Strictly speaking, Turing’s proof applies only to formal systems that model computation via a certain set of properties. We don’t know for sure whether or not the brain is such a system, but it’s almost certainly at least as complex as one, as demonstrated by the fact that we can mentally model a Turing machine. So I would be surprised if his proof were not also applicable to human behavior.)

 

Going forward

I think that the theory that humans are self-aware agents that generate creative behaviors via modeling themselves and choosing other paths has interesting consequences for psychology, AI, and political philosophy. One of those consequences is the agency lifecycle above, which explains the learning process for an agent so constituted. Exploring those consequences will have to wait for another day, however!

 

Written by jphaas

November 18th, 2013 at 4:19 pm
