Neurotypica

Animals first

To find the mind's construction in the face

― King Duncan

Usually, when people talk about an enthusiasm for 'neuroscience' and the 'brain', they're really interested in cognition—how 'thinking' happens. People want the instruction manual for this device in the head that they've been told controls their behaviour. And maybe, they want the instruction manual for other people's too. But it's not so much the device they are curious about, as the controlling of behaviour.

This common mistake is no-one's fault, really. It's accidental PR. Calling the control of human behaviour 'neuroscience' just sounds good. The fact that we perceive a 'self' perched somewhere behind our eyes doesn't help—it lends weight to the idea that the 'brain' and 'thinking' might be interchangeable terms.

In reality though, a close study of human neuroanatomy will tell you very little about human behaviour. There isn't much to be learned about good driving in the internal workings of the carburetor. Similarly, there isn't a great deal to be learned about human behaviour in the nuclei of the basal ganglia.

Honestly, the best lessons in human behaviour come from cognitive science. Models of how thinking and behaving happen that don't particularly rely on the architecture of the brain at all. What people want is a science of the mind, not a science of the brain.

There is value in a science of the brain, though. The number one error that philosophy and sciences of the mind make—over and over again—is in imagining cognitive functions that simply don't exist. Models of the mind that couldn't possibly be true, because there's no way the architecture of the brain could support them. The second major error is overemphasising cognitive functions that aren't actually that important. Models of the mind that are over-complicated, or over-powered for the job we imagine them to be doing.

The reason for this is a deeply held notion that humans are set apart from other animals. That we are rational, superior, and 'in control' of the world in a way other animals are not.

A science of the brain helps us understand that, all things considered, this is probably not particularly true. Humans are special, but even though our specialness is one of the most obvious things about us, it might not be the most important. At least not when it comes to understanding ourselves. And our specialness is almost certainly not for the reasons we typically assume.

When we use a science of the brain to scaffold our understanding of the mind, we come to make sense of "the halt, the lame, the half-made creatures that we are". Not because we're flawed, but because before we are human, we're animals first.

The structure of the brain

Any conversation about the brain relies very heavily on understanding the architecture of the thing.

We don't, however, need to know everything about the architecture of the brain. Not if what really interests us is the mind. We simply need to know enough to get a feeling for what kind of thinking is possible in the structure the brain provides.

Before we do that, though, we should make some points very clear that often get lost among the flightier discussions in neuroscience.

First, the brain is merely the most recent addition to the nervous system. The nervous structures are simply a means of passing information around the body to coordinate perceptions and actions. The brain is primarily (some would say entirely) an extension of this function. Anything we suspect the brain might do beyond that is fairly speculative.

What this means is that the only thing we can really be confident about in the brain is that it can transform input to output. It can take in information about how the world is and transform that into information about how we should move our bodies in response.

What is much less clear is how, and to what extent, the brain does anything else. Writing in the 1940s, Kenneth Craik helped bring the notion of 'cognition' into the psychological sciences. He noticed that not all animal behaviour can be explained by understanding the animal's history of learning to respond to various stimuli. In particular, when a stimulus isn't actually there, animals can still respond.

For example, if I begin describing a dripping, juicy lemon, you might feel your mouth starting to water. Somehow, your brain is producing the lemon stimulus and a corresponding reflexive response to sour tastes without the lemon existing.

This is the core of cognition---how does the brain 're-present' stimuli that aren't there to be directly perceived---and it is essentially unsolved. And yet, researchers have a problematic way of confusing themselves into thinking otherwise. Don't let yourself be tempted to do so as well. Be confident about transformations of input into output, of perceptions into actions. Learn the shape of this in the human brain as you read along. And let that be your north star when it comes to speculating about any kind of 'thinking' that is not a transformation (and question whether there even exists such a thing at all).

Secondly, you should pay attention to the fact that the brain operates on many different structural levels, from molecules to cells to networks of brain regions. It also operates on many levels of analysis, from the 'hardware' just described, to the computations that hardware supports, to the algorithms by which those computations are carried out.

The brain, therefore, operates on many levels that are entirely non-intuitive to us because we are not molecules, networks, or algorithms, but people. As such, our conclusions about what the brain is doing, or how and why, are often at best rough approximations and at worst completely off-base. We should, therefore, focus on working out what kinds of approximations are sensible.

I will highlight these as we go along, although your first lesson should be memorised already---thinking about how behaviour happens in terms of transformations is usually an excellent and sensible approximation.

And so, without further ado, we will take a whirlwind tour through the 'gold standard' textbook of neuroscience, with some additional readings to go alongside the chapters.

Kandel, Eric, Schwartz, James, Jessell, Thomas, Siegelbaum, Steven & Hudspeth, A. J. (2013) Principles of Neural Science

And while you're going along, you might want to play with a model brain.

Nerve Cells and Neurons

read Chapter 2 on nerve cells and neurons

I would suggest skimming through this chapter a little, but only a little. It's a slog, but neurons are the basic building blocks of the brain and if you're serious about understanding the value of brain science, you should probably understand them.

Neurons are the basic signalling units of the brain. They, like all nerve cells, are responsible for mapping perceptions to the appropriate actions, and as such, the way neurons are wired together determines an enormous proportion of our behaviour.

The detail in the textbook chapter will give a sense of the nuance in these structures and their interconnections. It will also give you a sense of their limitations. The most important of these is that information cannot pass through the brain magically. It has to travel a pathway limited by which neurons are connected to which other neurons (and how). 'Pathways', then, are a good approximation to use when thinking about how the brain works as it relates to neurons.

Those pathways are given to you by evolution, and then developed by you as you go through life. The strongest pathways are the ones you have trained the most and, since you cannot control the electrical firing of your neurons directly, there's only so much you can do to change the pathways you're using in the moment.
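To make the 'pathways' approximation a little more concrete, here is a minimal toy sketch (my own illustration, not anything from the textbook; the rule and the numbers are invented) of the idea that connections strengthen with use, so the most-practised route through a chain of neurons comes to pass signals more and more readily:

```python
import numpy as np

def traverse_and_strengthen(weights, learning_rate=0.05):
    """Pass a signal along a chain of connections, and strengthen each
    connection used (a crude Hebbian 'use it and it grows' rule)."""
    signal = 1.0
    for i in range(len(weights)):
        signal *= weights[i]                             # weak links attenuate the signal
        weights[i] += learning_rate * (1 - weights[i])   # used links strengthen toward a ceiling
    return signal

# A toy 'pathway' of four connections, all starting out weak.
for uses in (1, 10, 100):
    weights = np.full(4, 0.1)
    for _ in range(uses):
        out = traverse_and_strengthen(weights)
    print(f"after {uses:>3} uses, signal passed along the pathway = {out:.4f}")
```

The only point of the sketch is the shape of the claim: use strengthens a path, a strengthened path carries signal more readily, and that is why the routes you have practised most are the ones you find yourself using in the moment.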

Importantly, there are other kinds of cells in the brain: glial cells, which make up much of what we call 'white matter'. White matter fits all around the neurons (the 'grey matter') and takes care of them---protecting them, repairing them, and improving their performance.

What is interesting is how little attention white matter gets. It has been mostly neglected in research, but more recent work indicates that it might be substantially more important than we think.

For a digression on this topic, I direct you to a friend of mine's amusing and provocative paper on the subject (please do not share this):

Kaldas, Antonios (2014) 'In Praise of Mushiness'

But possibly the most useful approximation we can consider when it comes to white matter is that it very much influences pathways. The more you practice something, the more white matter will grow around those pathways---all the better to support them in doing this thing that we do all the time. This is wonderful for speed and performance. But it means that these neurons, now surrounded by all this construction, can't change their connections very easily anymore. They are quicker at doing that one thing, but much less good at doing other things. They are less flexible.

For an interesting example of what this means in practice I recommend this article: Teenage brains aren't undeveloped, they're just doing something else.

Neurotransmitters

a necessary tangent into neurotransmitters

It is also worth briefly exploring the molecular level---the actions of neurotransmitters and other neuromodulators. For our purposes this is better done on the Wikipedia page than in the textbook, especially the section on actions.

You might often hear people talk about the 'reward neurotransmitter' or the 'love hormone' or the 'happiness molecule' and so on. But although we know some actions of these neurotransmitters, we actually have very little idea how those actions play out in actual behaviour.

For example, serotonin is known as the 'happiness molecule'. We know it has a relationship to mood: we think, for instance, that the despondent 'come down' after taking drugs like MDMA is related to a depletion of serotonin in the brain.

But if you just flood the brain of a depressed person with serotonin, they don't get happier. The anti-depressants that do this only improve symptoms for something like a third of people---a similar number to simply providing a placebo pill instead. More troublingly, people taking these drugs are more likely to try to kill themselves.

I wouldn't draw too many conclusions from this, because many other factors are at play there. But it certainly means that calling serotonin the 'happiness molecule' is overstating the case.

Looking at neurotransmitters to explain human behaviour is something like looking at the dust in the nose of the sneezing person on the 45th floor to determine something about that company's supply chain. It might be asbestos, making everyone ill and driving down productivity. Or it might just be dust.

Indeed, I'm frankly never quite sure exactly what value neurotransmitters have in actually understanding brain and behaviour better. Only that it makes people feel smarter to say things like 'doing x thing releases y neurotransmitter which has z psychological benefit'. I have never once heard a construction like that where the reference to the neurotransmitter was anything more than cosmetic (i.e. dropping the neurotransmitter bit would typically make zero difference to the meaning of the sentence). More to the point, I've never heard one of these where the link between the neurotransmitter and the benefit was as legitimate as implied. They are psychological snake-oil and should be referred to sparingly, knowing that all you're doing is playing a confidence game with your audience.

All you really need to know about neuromodulators is that changes in the concentrations of these molecules across populations of neurons have difficult-to-understand effects on the way we learn and respond to the world.

The best-known effects are related to simple learning mechanisms. Changes in neuromodulators signal to our neurons that they should change their pathways. That the pathways that exist might need to be updated. You can read about the effects of those changes in these articles on classical and operant conditioning. But notice that nowhere in those articles do I mention neurotransmitters, because it serves no practical purpose to do so. The only real value comes from knowing that those two kinds of learning are all about the neural pathways that develop in response to the statistical structure of the environment we live in and our habitual responses to that environment. Put differently, certain perceptions usually have certain consequences, and often certain responses are best when dealing with those consequences.
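For the mathematically inclined, the standard way this 'statistical structure of the environment' idea gets written down for classical conditioning is the Rescorla-Wagner rule: the strength of a stimulus-outcome association is nudged, on every trial, by a fraction of the prediction error (what happened minus what was expected). A minimal sketch, with the learning rate and trial numbers chosen purely for illustration:

```python
def rescorla_wagner(trials, alpha=0.2):
    """Update the bell-food association V toward what actually happened on each trial."""
    V, history = 0.0, []
    for food_present in trials:
        prediction_error = (1.0 if food_present else 0.0) - V   # the 'surprise'
        V += alpha * prediction_error                           # learn a fraction of it
        history.append(V)
    return history

# 20 bell-food pairings (acquisition), then 10 bell-alone trials (extinction).
history = rescorla_wagner([True] * 20 + [False] * 10)
print(f"association after pairing: {history[19]:.2f}; after extinction: {history[-1]:.2f}")
```

The usual neuromodulatory story attaches dopamine-like signals to that prediction-error term, but as above, the practical meaning sits in the rule itself: associations track how reliably one thing has followed another.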

To get a real sense for the complexity in neuromodulation, I'd suggest looking at Chapter 46 of our textbook to the extent you can stomach it, and this bonus article for the mathematically-inclined, with emphasis on the table on page 413:

Doya, Kenji (2008) 'Modulators of Decision Making', Nature Neuroscience 11, 410
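Doya's suggestion, very roughly, is that the tuning knobs of a textbook reinforcement-learning agent are plausible computational counterparts of the major neuromodulators: the reward prediction error to dopamine, the learning rate to acetylcholine, the randomness of action selection (inverse temperature) to noradrenaline, and the discounting of future reward to serotonin. The sketch below only shows where those knobs sit in a standard Q-learning update; the mapping is his hypothesis rather than established fact, and the code and numbers are my own toy illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# The 'knobs' Doya relates to neuromodulators (a hypothesised mapping, not settled fact):
alpha = 0.1   # learning rate                      ~ acetylcholine
gamma = 0.9   # discounting of future reward       ~ serotonin
beta  = 2.0   # inverse temperature (less random)  ~ noradrenaline
# ...and the reward prediction error computed below ~ dopamine

Q = np.zeros(2)   # learned values of two possible actions

def choose(Q):
    """Softmax action selection: higher beta means less exploration."""
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()
    return rng.choice(len(Q), p=p)

for _ in range(500):
    a = choose(Q)
    reward = 1.0 if (a == 0 and rng.random() < 0.8) else 0.0   # action 0 usually pays off
    # Treat the task as one repeating state, so 'what comes next' is worth gamma * Q.max().
    prediction_error = reward + gamma * Q.max() - Q[a]          # the dopamine-like term
    Q[a] += alpha * prediction_error

print(np.round(Q, 2))   # action 0 should end up valued more highly than action 1
```

None of this says anything about how the molecules actually achieve these effects in tissue; it is just a compact way of stating what kind of influence each one is hypothesised to have on learning and choosing.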

The nervous system

read Chapter 15 on the nervous system

Recall that the brain is merely an extension of the evolutionarily ancient nervous system. Exploring this idea is extremely helpful to us in understanding the brain.

A nervous system helps coordinate perceptions and actions. The more complex the nervous system, the more perceptions and actions are possible.

Information comes in through our senses and travels from nerves at those sites along a pathway that journeys through the spinal cord, the brain stem, and then often a structure roughly in the centre of your brain called the thalamus. From there, the information is made available to the rest of the brain---the thalamus is often called the 'relay station' of the nervous system. In the brain, the information is transformed from the perceptual information it came in as into the motor (movement) information it needs to go out as, so we can deal with our environment.

That information will travel more or less of this journey, depending on its complexity. For example, some reflexes never need to go all the way to the brain. When the doctor taps your knee, your spinal cord has all the information it needs to send an impulse back to your leg to kick. Some information needs to spend quite some time in the brain, undergoing various transformations, to be useful. Take verbal instructions, for example---in through the ear, and then, as we will see in Chapter 1, a ricochet around various regions of the neocortex as you translate spoken words into some kind of representation of action and back out again.
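As a caricature of that 'more or less of the journey' idea, here is a toy dispatcher, entirely invented for illustration (the stimulus names and responses are made up), in which a simple stimulus is answered from the spinal cord while a complex one takes the long route through the thalamus and cortex:

```python
# Made-up stimuli and responses, purely to illustrate the routing idea.
REFLEXES = {"patellar tap": "kick"}   # handled entirely in the spinal cord

def respond(stimulus):
    if stimulus in REFLEXES:
        # Short loop: the response never needs to reach the brain at all.
        return f"spinal cord -> {REFLEXES[stimulus]}"
    # Long loop: up the spinal cord and brain stem, relayed through the thalamus,
    # transformed in the cortex, then sent back out as movement.
    return f"thalamus -> cortex (work out what '{stimulus}' means) -> motor output"

print(respond("patellar tap"))
print(respond("please pass the salt"))
```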

The most important feature of the nervous system in the brain, then, is how it has organised itself to help this kind of processing, and what approximations that gives us for understanding behaviour better.

The cerebral cortex, or neocortex, appears to be primarily providing additional processing space for the nervous system. A big sheet of vertical columns of neurons, subdivided into 'neural maps' of our different senses. This particular idea will become quite important later.

Subcortical regions appear more specialised, organised into 'clusters' or nuclei, and probably primarily help to regulate or modify the processing happening in the cortical regions.

All of these are extensions of the central nervous system, with the spinal cord as the highway from the senses to the brain and back again. The approximation we can take from this is that information comes into the brain and traverses the cortex to transform itself from input to output. Along its journey, it takes in more specialised information from the subcortical regions---information about times we might have had similar perceptions, or about the emotional content of the situation, and so on. This all results in some kind of response, which is fed back out into the nerves responsible for moving us around.

The peripheral nervous system feeds the central nervous system, acting as a bridge from your spinal cord to your senses. It feeds in information about the world and information about the body.

But the peripheral nervous system is much older than the central nervous system, and it can do plenty of processing all by itself. It was the brain before brains were a thing, and as such it keeps its own space in this much newer structure everyone makes such a fuss about---it gets 'grandfathered in', so to speak.

We will return to this idea shortly, but a very critical approximation is that there is nothing that the central nervous system does that isn't mediated by the peripheral nervous system. The PNS has first contact with the information, and the CNS and PNS are married to one another all the way along that information's journey to the brain and back.

To put it another way, many people will say that the central nervous system is about voluntary action and the peripheral nervous system is about involuntary action. But for our purposes, we must understand that there is no such thing as voluntary action without at least some influence of involuntary action, from the way our senses work, to the peripheral nervous system's intense concentration on homeostasis---keeping our body in balance---to the fact that the vast majority of our behaviour is reflexive and automatic.

At this point, it might be useful to read Chapter 47, on the autonomic motor system (a subdivision of the peripheral nervous system). If not here, then you will when we get to the brain and emotion shortly.

Brain regions

read Chapter 1 on regions of the human brain

We typically think of the brain as anatomically and functionally localised. That is, certain regions specialise in certain functions. For example, we have the 'language centres' of the brain---a coin-shaped patch on the left side about halfway up. This brain region, we think, is crucially involved in producing language.

These kinds of regions then interact with other brain regions to get more complex jobs done. For example, when we think about speaking, the language centres are active, but so too are other brain regions that we think are involved in reflective thought at the front and sides of the brain.

The most useful approximation that emerges from this feature of the brain is that certain regions of the brain are particularly important in certain kinds of activities.

It should be noted that, increasingly, researchers feel this view of the brain has been overemphasised. But it remains a very sticky idea, and much of our thinking about the brain centres on it, for better or worse. We will return to this tension later on.

Brain networks

read Chapters 17 and 18 to bring everything together as neural networks

We are now in a place where we can discuss the brain in full-fledged operation.

We started off by learning that specific regions of the brain are associated with specific kinds of information. These regions must work together because our world, and our interactions with it, are typically comprised of many different kinds of information at once.

We then learned about our neurons---chains of interconnected signalling devices that make up the 'pathways' for information to travel. These are supported by white matter: the infrastructure that supports the neurons in moving information around.

Some of these pathways are handed to us by evolution, but many more are developed by us over time. The biological key to the growth and development of these pathways is the neuromodulators, but what matters for practical purposes is the statistical structure of the environment we live in and the ways in which we respond to it.

In this way, the brain is fundamentally an extension of the nervous system. The nervous system takes in information about the world and sends it to the brain if that information is complex, so that it can use the extra processing space available in the neocortex. Along the way it picks up other kinds of information from more specialised nervous structures like the peripheral nervous system and subcortical brain structures. In this way, the journey transforms the information from information about the world into information about how we should respond to the world. Then, that information is sent back down the spinal cord to do its magic.

This raises the question: how does information about the world get transformed into information about the response?

We have already discussed some of this. Classical and operant conditioning describe mechanisms of learning that simply map certain kinds of input to certain kinds of output in a more or less 1-to-1 fashion.

William James, the 'grandfather' of psychology, describes an amusing anecdote that captures this in humans. He tells of a time in which he went upstairs to change for dinner, and instead found himself changed into pyjamas and getting into his bed. His pathway for 'changing for dinner' overlapped with his pathway for 'changing for bed', and he unwittingly took a turn down the wrong one.

All that is required for such a thing is for a pathway to exist that links some particular combination of inputs (sights, smells, familiar ways of moving our bodies) with some particular combination of outputs (a habitual way of responding to the world). Going to the wardrobe is often linked to changing clothes. Changing clothes at night is often linked to putting on pyjamas for bed. Common inputs are mapped to common outputs in the vast majority of actions we take day-to-day.

We have mapped some of those pathways in the brain.

Our bodily organs, after travelling the nervous journey, eventually 'plug in' to our cortex at various points. The eyes plug into the back. The ears into the side. Our faculties of touch and movement at the top and side. And so on.

At these sites of connection between the cortex of the brain and body, we have beautiful 'maps' of features of the world. At the sides of the cortex, where the ear is connected, we have small clusters of cells that fire for specific frequencies of sound. At the back, where the eyes plug in, we have clusters of cells that respond to the very basic features of the visual world—colours, specific orientations, changes in contrast and so on. And where our bodily organs plug in, we have a map of the body—a map of what we feel and a map of how we move.
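To put a slightly more concrete shape on 'clusters of cells that fire for specific frequencies', the usual cartoon is a tuning curve: each cluster responds most strongly to its preferred frequency and progressively less to neighbouring ones, and the clusters are laid out in an orderly sequence along the cortical sheet (a tonotopic map). A minimal sketch with invented numbers:

```python
import numpy as np

# A toy tonotopic map: ten clusters, each 'preferring' one frequency,
# laid out in order along the cortical sheet. All values are illustrative.
preferred_hz = np.linspace(100, 1000, 10)

def cluster_responses(sound_hz, tuning_width=100.0):
    """Gaussian tuning: a cluster fires most when the sound is near its preferred frequency."""
    return np.exp(-((sound_hz - preferred_hz) ** 2) / (2 * tuning_width ** 2))

r = cluster_responses(440.0)                 # play a 440 Hz tone
print(np.round(r, 2))                        # a bump of activity centred on nearby clusters
print("most active cluster prefers", preferred_hz[np.argmax(r)], "Hz")
```

The same cartoon, swapping frequency for edge orientation or for a patch of skin, is a decent first approximation for the other primary sensory maps mentioned above.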

The picture gets more complicated as we move away from these primary sensory regions, but what the cortex appears to be doing is storing information related to the interaction of those more primary sensory regions nearby---about less specific and more general features of the world that relate to multiple kinds of information.

I direct your attention to Chapter 18, Figure 18-3, on the dorsal (going upward) and ventral (going downward) streams in the cerebral cortex as a guide. Here, we are talking about various parts of what's labelled 'association cortex'.

For example, move away from the region of the cortex that codes for the body parts we move, toward the vision part, and you start to see clusters of cells that enthusiastically respond to things like motion perception—something that is both about moving and about seeing movement. Move away from the vision part and towards the hearing part and you see regions of the cortex that frenetically respond to things in the world that are both audio and visual, like a barking dog. Move away from the hearing part and towards the body part and you find the language centres of the cortex—something that is both about hearing speech and producing speech with our body.

Essentially, the cortex tries to put things that are related to one another very close together, constrained in large part by where our senses are plugged into it.

This is a sensible strategy, of course. Storing chunks of meaning about the world is only useful if we can link them to one another to make more complex meanings. The shape of a dog at the eye combined with the sound of a dog at the ear helps us understand that this is a dog and not just a picture of one.

The more related such things are to one another, the closer the cortex puts them together. There could be any number of reasons for this, but in all likelihood it's simply because doing so is cheaper. You save on wiring and the energy required to pass messages from one place to another.

You end up with a view of the brain that is mapped to the structure of the world. This is most clear in the motor cortex. Michael Graziano describes what he calls 'ethological action maps': regions of the motor cortex do not just correspond to specific parts of the body, but to specific actions that we habitually make. In monkeys, for example, there are regions that, when stimulated electrically, will bring the monkey's hand to its face as though it is feeding itself. Nearby is a region that will make the poor monkey chew.

Graziano, Michael S.A. (2016) 'Ethological Action Maps: A Paradigm Shift for the Motor Cortex', Trends in Cognitive Sciences 20, 121--132

It helps to think of the whole brain in this manner: as a kind of ethological action map. But to do that we must move from a conception of the regions of the brain, and the pathways that journey through them, to the networks of the brain that connect these regions.

It is not so complicated as it sounds, nor as it is often explained. Instead of thinking about the language centres of the brain, we might think instead of the language network of the brain. Returning to Chapter 1, Figure 1-6, you can see this network in action. Looking at words, listening to words, and speaking words are all regions connected as a network. When you are thinking of words, you need to activate the entire thing.

So thinking in terms of neural networks, rather than being restricted to regions and pathways, is one important approximation we should make.

A second important approximation stems from a peculiar problem of the brain.

To illustrate, I'd get you to read a list like this:

red

green

blue

Then I'd ask you to name the colour of the ink, as opposed to reading the words. You're going to find that naming the colours is much more difficult. This is probably because you read words all the time, and because you hardly ever name colours. So you want to read the words, but I've told you to name colours and you must somehow overpower the automaticity of reading to do that.

This particular task has fascinated researchers for the last 90-odd years, because we really aren't quite sure how the brain does that overpowering. But you should be pretty clear by now about what's happening up until that point.

Putting our approximations to work, the general idea is that, in this task, the brain has two pathways from input to output. One pathway for word reading, and another pathway for colour naming. You can imagine that these two pathways run from the eyes to our brain, squiggle around for a bit, then eventually come out and end up at the muscles controlling the mouth. All along this route, these pathways overlap. At the places they overlap, there might be conflict. For example, you can't read words and name colours at the same time, because you only have one mouth. Conflict: one must win.

You can imagine similar kinds of interference happening in the brain, since both pathways probably rely on similar networks---our language network from earlier, for example. And each time this happens, the word-reading pathway is stronger and more dominant than the colour-naming one. They are both competing for the same resources---the networks they both need to finish their journeys.

And thus, the effortful feeling of trying to force the colour pathway to 'win'.
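The classic way to make this two-pathway story computational (Cohen, Dunbar and McClelland's connectionist model of the Stroop task is the usual reference) is to let both pathways push on a shared response layer, with the word-reading pathway carrying stronger weights because it is better practised, and a top-down 'task' input that has to bias the colour-naming pathway hard enough for it to win. The sketch below is a heavily simplified toy in that spirit, not their actual model; every weight is made up:

```python
import numpy as np

# Stimulus: the word "RED" printed in green ink.
word_evidence   = np.array([1.0, 0.0])   # [red, green] from reading the word
colour_evidence = np.array([0.0, 1.0])   # [red, green] from seeing the ink

WORD_STRENGTH   = 2.0   # well-practised pathway (made-up weight)
COLOUR_STRENGTH = 0.8   # rarely-practised pathway (made-up weight)

def respond(task_bias_toward_colour):
    """Both pathways converge on one response layer; the larger total drive wins."""
    drive = (WORD_STRENGTH * word_evidence
             + (COLOUR_STRENGTH + task_bias_toward_colour) * colour_evidence)
    return ["say 'red'", "say 'green'"][int(np.argmax(drive))]

print("no top-down bias:    ", respond(0.0))   # word reading wins: "say 'red'"
print("strong top-down bias:", respond(1.5))   # colour naming wins: "say 'green'"
```

The 'effortful feeling' in the prose corresponds to how much extra top-down bias the weaker pathway needs before it wins; where that bias comes from is exactly the open question that follows.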

The curious question then becomes, how on earth does the brain choose which pathway gets to win?

In the brain, this is far from clear.

We know this happens, because of things like the task just described. We call them 'executive functions' or 'executive control'. But we frankly have only the slightest idea about how these functions might happen in the brain.

These hypothetical executive functions have variously been described as a collection of separate, specific functions, or as one unitary power. Every time they appear, though, they are thought to somehow arbitrate over our lower-order processes (pathways or networks) to achieve a goal. But honestly, it's all just speculation. The best article for a technical explanation of how this might work that I have come across is this one:

Botvinick, Matthew Michael (2012) 'Hierarchical Reinforcement Learning and Decision Making', Current Opinion in Neurobiology 22, 956--962

But it is painful to read, I warn you. I think the crucial thing is that it's worth keeping in mind that we probably have some mysterious 'executive network' that probably does things like this. But we shouldn't be too enthusiastic in invoking it to explain behaviour, because we have no idea how it works. We end up with what's known as 'the homuncular problem of cognitive science', in which we populate the brain with little people who are controlling us. It's a lazy way of not thinking about how thinking happens (because you've yet to explain how their brains work, so you haven't actually explained anything at all).

We do know roughly where this executive network is, assuming it exists.

Recall that, in essence, certain regions of the brain appear to represent aspects of the world in detail, and as one moves away from these regions, the representations become more general. If one follows that logic to its conclusion, one would end up with some regions of the brain that were quite general indeed---coding for the most abstract ideas and concepts. These domain-general regions would likely be involved in more-or-less all activities that were not heavily practiced or heavily tied to one specific aspect of the world.

Many of these regions seem to be in the prefrontal cortex, but there are actually a lot of them wherever more than one domain-specific region intersects with another. If you return to Chapter 18, Figure 18-3, you can probably guess where these are: anywhere labelled 'association cortex' has spots of brain that respond during almost any difficult task, or any task which requires some kind of conflict resolution. We might call these 'frontal-parietal networks' (networks toward the front and the top of the brain).

And so, our second approximation is that there are networks that probably help coordinate the transformation of input to output. They arbitrate over them, or otherwise guide them when there's conflict.

There are a couple of other useful networks to consider, alongside our possible executive network that might play roles like this.

We have a salience network that seems to be a broad swath of the primary sensory regions of the brain and some other bits of association cortex involved in helping us determine the salience (relevance/importance) of stimuli. We see this active whenever we're paying attention to the external world, and not so much when we're paying attention to our internal world. This necessarily involves interacting with our emotional processing because there is little we pay attention to that isn't either good or bad in some kind of way, or else why pay attention to it at all?

We also have a default mode network, which seems to be a network most active when we are reflecting internally---daydreaming, imagining, reflecting on the past, and so on.

Importantly, all three of these networks are enormous, and probably could be broken down into any number of subnetworks. Or perhaps not. Additionally, these networks have some overlap. But it is useful to note that we can roughly group human behaviour according to the brain into 'attending to the world out there', 'attending to the world in here', and 'attending to the world when there's some kind of conflict in perceiving and acting'.

I will provide here some optional additional reading on thinking about the brain as a whole. Particularly the executive network, to give you a sense of the complexity.

First is Barbara Webb on the way scientists talk about information in the brain. The brain mostly does transformation, as we've discussed. Representation should be treated as a special case---the core problem of cognition: how does the brain 're-present' things when they aren't there? Don't be tempted to confuse these things, as many researchers do. (Re-presentation probably has something to do with the default mode network!)

Webb, Barbara (2006) 'Transformation, Encoding and Representation', Current Biology 16, R184-R185

The second is the first chapter of Michael Anderson's book 'After Phrenology', which is a bit radical but gives a lot of perspective. The long history of functional and anatomical localisation of things like the 'language' centre of the brain naturally makes us think of the brain in terms of 'modules'. Modules for language. For emotion. For 'executive function' and so on. This is true of older brain science, which tried to find these in specific regions. It's true of the newer network approach, which tries to find these in our distributed 'salience networks' and whatnot. Anderson suggests that there's no particular reason the brain needs to localise these things to specific networks or regions at all. If the cortex really is a kind of general processing structure, then you could just create these networks in whatever configuration, or whatever locations, were most convenient at the time. Kind of like the RAM in a computer. Just use, or 'reuse', whatever is free. And indeed, there is evidence to suggest that the brain does this at times. I don't know how useful this idea is in practice, except to emphasise that the brain probably isn't actually that helpful in understanding behaviour. It really is about the structure of the world and the kinds of problems we need to solve to navigate it:

Anderson, Michael L. (2014) After Phrenology: Neural Reuse and the Interactive Brain

Then three different theories about what the executive network is. I like them all, but they remain theories.

First, a research group who think that there are only three executive functions worth talking about: shifting between tasks, updating working memory, and inhibiting responses:

Miyake, Akira, Friedman, Naomi P., Emerson, Michael J., Witzki, Alexander H., Howerter, Amy & Wager, Tor D. (2000) 'The Unity and Diversity of Executive Functions and Their Contributions to Complex "Frontal Lobe" Tasks: A Latent Variable Analysis', Cognitive Psychology 41, 49--100

Another theory that suggests the executive network might actually just be some kind of ability to switch between lots of smaller networks quickly:

Barbey, Aron K. (2018) 'Network Neuroscience Theory of Human Intelligence', Trends in Cognitive Sciences 22, 8--20

And finally, a theory that holds that the executive network is actually a single core process of focus and integration of information, spread around those association regions of the brain we looked at in Chapter 18 Figure 18-3:

Duncan, John, Assem, Moataz & Shashidhara, Sneha (2020) 'Integrated Intelligence from Distributed Brain Activity', Trends in Cognitive Sciences 24, 838--852

Read, or don't read, but notice how each of these three theories seems to be at odds with the others. Welcome to my world---the ambiguous frontiers of cognitive neuroscience.

And finally, an article on how the brain makes meaning that describes very similar ideas but from a different angle.

Emotions and the Brain

read Chapters 47 and 48 on emotions and the brain

Let's talk about emotion and the brain now. There are a couple of important approximations that we need to draw out here.

The first is that, as we mentioned before, there is no sense in talking about the central nervous system without considering the peripheral nervous system.

A primary role of the PNS is to inform the CNS about the state of the body. We could call this something like a 'perception' of the body, similar to a visual perception or an auditory one. Another kind of sense.

This particular sense helps us to understand what is worth paying attention to in the world. In very brief terms, what is good and what is bad. This is related primarily to homeostasis---keeping the body in balance. So at the same time as we see or hear or feel something, we also get a visceral sense of what that means for our body. Food is good when we're hungry, and otherwise uninteresting. Rotten food is bad all the time. You feel these kinds of things in your body. This particular sense is a precursor to emotion. In fact, the distinction is arbitrary. There's no reason not to simply call it an emotion.

This feeling is then sent to the CNS for additional processing. Here is where we get a more nuanced picture of things---it's not just bad, but scary, or it's not just good, it's sexy, and so on.

The important approximation here is that it is hard to imagine a time in which there is perception without some kind of emotion, if only that very preliminary, visceral sense that emanates from the body via the PNS.

The second is that there's no reason to do anything at all unless there's an emotion surrounding it! Why pay attention to the world unless it has some meaning? And what would that meaning be, if it wasn't an emotional meaning?

This is particularly important when we start to talk about 'rationality' and other such concepts. If you suspect that the way you feel and are planning to act is not 'rational', but rather 'too emotional', you might decide to do something else. But that something else is still grounded in some kind of emotion---the idea that somehow it's more 'good' to do something else. This sense of 'goodness' comes from the same systems that generated your more impulsive pattern of thinking. All you've done is decided to make one kind of emotional decision over another! It just might be that you're taking into consideration some different input for that emotional decision. Perhaps thinking of some imaginary future which is more powerful or motivating than the events of the present.

So, our second important approximation is that there is probably no kind of action without emotion.

No perceptions, no actions, no decisions, come without emotional baggage.

And all of those things are related to how your body remembers the environment---how good and how bad have those perceptions been in the past. Or alternatively, how good and bad you might anticipate them being in the future.

To ground the idea, I'd direct your attention to this article on predicting human behaviour. I'd also recommend trying to read Katherine Peil's excellent article on emotion as a self-regulatory sense:

Peil, Katherine T (2014) 'Emotion: The Self-Regulatory Sense', Global Advances in Health and Medicine 3, 80--108

Memory

God gave us memory so that we might have roses in December. ― J.M. Barrie

Our final keystone in understanding the brain and behaviour is in a study of memory. This is a bit of a mess, but with what we already know, we can make some useful distinctions.

Trivially, we have long-term memory, short-term memory, and working memory. Short-term memory refers to when we pick information up but don't manage to store it for the long term. Long-term memory refers to information that we have managed to store for the long term. Working memory refers to information that we are paying attention to right now. This could be information we just picked up or information we had stored for the long term.

Another trivial sort of approximation we can make is that there are implicit and explicit forms of long-term memory. Implicit memories are those that we aren't conscious of. Think of muscle memory (procedural memory). Explicit memories are those that we are conscious of, like semantic information (dates, names, places, etc) or episodic memories (memories of events or 'episodes').

We can slice these things up even further if we like, but I'm not sure what value that has. A more useful exercise is thinking about what kind of architecture could support these kinds of memories.

Probably a great deal of memory relies on the pathways we create and develop in the processing space of the cortex. Those beautiful neural maps already describe the world and your interactions within it. Why would you use a separate structure to remember all that information? It wouldn't be very efficient.

This is where learning comes in. Remember that we have networks and regions that are highly specific. Think of our ethological action maps. We also have networks that are much more general, like our executive network. The more domain-general networks and regions are used in tasks we find difficult, new, or ones that we rarely engage in. Learning appears to be a process of moving the pathways that these rare or difficult tasks use from the more domain-general resources (like the executive network) to more specific ones (like the ethological action maps closer to the primary sensory cortices).

You might wonder, "aren't the executive network/s about controlling behaviour when there's conflict or arbitrating over lower-order processes?" The short answer is, not always. These kinds of networks are active for almost all tasks, but far more active during difficult or novel tasks. It might be that these regions act as a kind of domain-general resource---a sort of blank slate of extra processing power. Remember that these regions appear in places where multiple kinds of domain-specific information processing regions meet each other. These domain-general regions might act as a buffer to hold onto this strange or rare information until your brain can figure out how to link the information coming in to the information that needs to go out.

However, there is another answer that is worth considering. It might very well be the case that these executive networks aren't doing much controlling or arbitrating at all. They might just be incomprehensibly complex pathways that the brain defaults to using when things aren't well practiced. Somehow, in those tangled by-ways of the brain, conflicts end up finding a space where there isn't overlap and things just sort themselves out. It's not easy to imagine, but we don't actually have any concrete explanation for how the brain solves problems like this, so this is as good as any. I'd refer you again to Anderson's book 'After Phrenology' for one very well-thought-out example of this.

What is absolutely clear is that when we do practice things, there is a consolidation of those pathways---the white matter grows to support and improve the performance of these pathways.

These consolidated pathways, then, are a form of memory. In these pathways are information about the senses, about the movements, and about the body and the emotions that are related to the world we have been acting in.

That, in itself, is sufficient to explain implicit memory. To explain explicit memory, we need some kind of mechanism that can access these pathways without requiring the actual stimuli to be present.

And we have come back to the problem of 'representation' in cognitive science. How can the brain 're-present' stimuli when the stimuli aren't there?

And once again, we don't have very good answers to this question. It certainly seems like the hippocampus is very important for this. With damage to the hippocampus, people have a great deal of trouble remembering events. Funnily enough, this might have something to do with space. In rats, the hippocampus will generate 'place fields'---essentially a neural map of the places they are in. The longer they have to learn the place, the more detailed the map and the more stable it is over time. You can, from these maps, predict where the rat is in its environment. You can read more about this in Chapter 67, under the heading 'A Spatial Map of the External World is Formed in the Hippocampus'. If you really think about it, what are our episodic and semantic memories but information about space? Both three-dimensional space, but also space across time? It might be that the hippocampus has some kind of complex four-dimensional mapping of the pathways in the other structures of the brain to our history in the world, and this gives rise to more complex forms of memory.
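The place-field idea is concrete enough to sketch. Each hippocampal place cell fires most when the animal is at that cell's preferred spot, and from the activity of the whole population you can read the animal's position back out. This is a toy version with invented numbers, not a model of real recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Twenty toy place cells, each with a preferred location along a 1-metre track.
centres = np.linspace(0.0, 1.0, 20)

def population_activity(position, field_width=0.05):
    """Each cell fires most when the rat is near its preferred spot (a Gaussian place field)."""
    rates = np.exp(-((position - centres) ** 2) / (2 * field_width ** 2))
    return rates + rng.normal(0.0, 0.02, size=rates.shape)   # a little noise

def decode(rates):
    """Read position back out as the activity-weighted average of cell preferences."""
    w = np.clip(rates, 0.0, None)
    return float((w * centres).sum() / w.sum())

true_position = 0.63
estimate = decode(population_activity(true_position))
print(f"rat is at {true_position:.2f} m; decoded from the population: ~{estimate:.2f} m")
```

The speculation in the paragraph above amounts to wondering whether the same trick, run over more abstract 'spaces' (including time and the brain's own pathways), could underwrite episodic and semantic memory.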

But enough speculation. The primary approximation that we should come to grips with here is that memory, for the most part, almost certainly uses the same architecture as our processing of the present. As an interesting corollary, our imagination probably does too. You're basically remembering the future by putting together existing pathways in likely configurations that haven't happened yet. This is poignantly illustrated by the work of Adrian Owen. He asks behaviourally non-responsive patients (those in vegetative states) to imagine playing tennis, and a surprising number of them will activate pathways very similar to those we might expect to see in someone actually playing tennis.

To really nail this home, I recommend reading this article on repressed memories.

Anatomy Wrap Up

It is at this point that I would recommend reading a book to tie your understanding together a little more tightly. For that, I recommend:

Swanson, Larry W. (2003) Brain Architecture: Understanding the Basic Plan, Oxford ; New York: Oxford University Press

It is worth also getting a sense of just how difficult brain science is:

Here is me complaining about my main concern, which is that brain science doesn't really test the real world so much as things we've set up to be predictable.

And here are some amusing, or provocative articles worth considering for a bit of colour:

Hommel, Bernhard, Chapman, Craig S., Cisek, Paul, Neyedli, Heather F., Song, Joo-Hyun & Welsh, Timothy N. (2019) 'No One Knows What Attention Is', Attention, Perception, & Psychophysics 81, 2288--2303

Jonas, Eric & Kording, Konrad Paul (2017) 'Could a Neuroscientist Understand a Microprocessor?', PLOS Computational Biology 13, e1005268

If we understood the brain, would we even know it?

Evolution, brain, and behaviour

Evolution is often an overrated tool to interpret animal behaviour. Any number of evolutionary stories can be dreamt up, and none of them can be directly tested until the advent of some kind of time-machine. But in the case of analysing brain and behaviour, it tells us two very important things:

  1. Humans share structures with other animals; and
  2. Humans share functions with other animals—and not simply functions that appear in those shared structures. Some functions appear in even very phylogenetically different structures (i.e. structures with different evolutionary paths or histories).

This has two important corollaries:

  1. There is something about the brain that reflects our evolutionary development—we are animals first; but also that
  2. There is something about the brain that reflects the structure of our world.

We are animals first, and much of our brain is likely to serve animalistic kinds of thinking and acting: instinct, impulse, automatic responding. This is not only because our brains are related to those of other animals, but also because the world forces or encourages certain evolutionary answers to questions of survival and reproduction even when animals develop in very different evolutionary directions. As such, many kinds of 'clever' thinking are probably available to animals to the extent that they need them in their ecological niche.

As disappointing as it is, we must be very careful not to view human behaviour as 'special' until we are confident that this is so. So many of our ideas about brain and behaviour have led us down garden paths toward a vision of 'rational' and superior humans floating above the evolutionary cesspit. The truth is much messier and we are simply half-made creatures made up of the same stuff as everything else around us.

Adding evolution to anatomy

Swanson, Larry W. (2003) Brain Architecture: Understanding the Basic Plan, Oxford ; New York: Oxford University Press

This book will emphasise almost all of my points automatically, and is a reasonable introduction to the complexity of nervous systems in animals. Worth a read, end to end, but best done I think after a grounding in basic human neuroanatomy.

Striedter, George F. (2005) Principles of Brain Evolution.

This book is rather dense, but quite excellent. We will take some select chapters here and if nothing else, I encourage you to read the conclusion.

Chapter 3: Conservation in Adult Brains (emphasis on the conclusion)

Evolution can happen at the level of the cell, of the molecules used by cells, and of the regions those cells make up. As Striedter notes, "evolution has repeatedly found different ways to build the same (homologous) structures ... [and to] combine old materials."

Thus, a similar-looking brain is not quite the same thing as a similarly functioning brain. Brains reflect an evolutionary history, but most importantly they must reflect the ecological niche of the animal. At times that ecological niche requires the same basic tools as our evolutionary peers, but at other times it requires different emphases.

Chapter 8: what's special about mammals? (emphasis on the conclusion)

Neuroscience has focused a great deal of attention on the mammalian 'neocortex', a relatively recent brain structure that appears to have a relationship to mammalian intelligence.

Non-mammals, in many ways, are as smart as mammals, and lack this structure.

Intelligence, therefore, is not a function of any specific brain region. Rather, intelligence develops independently, because animals need to solve problems and evolution will use the tools it has available to do so.

That said, the neocortex almost certainly provides some functionality that analogous structures in other animals (similar in function, but not in origin) do not. By implication, the inverse is probably also true.

Chapter 9: what's special about humans? (emphasis on the conclusion)

The difference between human brains and the brains of other mammals is very difficult to pinpoint.

There are differences though, both in size and organisation.

There are several likely implications of this, but once again they point to the needs of our ecological niche, rather than some kind of difficult-to-define 'specialness'.

Questions of evolution

The point of using an evolutionary perspective to think about brain and behaviour is to really drive home that humans are animals first. We should be very resistant to ideas about our specialness because this regularly leads us to imagine that we have access to magical brain functions. This is not true, and I've collected a series of articles that should emphasise that.

First, a piece on motive: Why do we want to think humans are different

Cisek, Paul (2019) 'Resynthesizing Behavior through Phylogenetic Refinement', Attention, Perception, & Psychophysics 81, 2265--2287

Does cognition even exist? In 1911, psychology pioneer Edward Thorndike speculated that the capacity to control impulses---percept-action mappings---might simply be a question “of an increase in the number, delicacy, and complexity of associations of the general animal sort” (1911, p. 286). More than 100 years later, Paul Cisek is asking a similar question. In evolutionary terms, it seems not impossible that what we call cognition can be explained by the proliferation of straightforward behavioural control mechanisms in a hierarchy of control, and that 'thinking', as something over and above this, is merely an illusion.

Lyon, Pamela (2015) 'The Cognitive Cell: Bacterial Behavior Reconsidered', Frontiers in Microbiology 6

Viewed in a certain light, even bacterial cells share certain functional homologies with our far more complex brains.

See also: Vallverdu, Jordi, Castro, Oscar, Mayne, Richard, Talanov, Max, Levin, Michael, Baluska, Frantisek, Gunji, Yukio, Dussutour, Audrey, Zenil, Hector & Adamatzky, Andrew (2017) 'Slime Mould: The Fundamental Mechanisms of Cognition', arXiv:1712.00414 [cs]

Article: Honeybees are smarter than they should be

Honey bees do a lot of things that they shouldn't be able to, with such tiny brains.

In particular, they can solve 'human-like' problems (badly). This indicates that 'human-like' problems are probably only different in degree, not in kind, and we should question very carefully what 'human-like' cognition really is.

Barron, Andrew B, Halina, Marta & Klein, Colin (2019) 'Types of Brain, Types of Mind: The Major Transitions in the Evolution of Cognition', White Paper

You will need to ask me for this article, because I'm not sure that it is ready for sharing. It is from this research group, if you'd like to see what collateral they have online so far.

Your alternative is this much, much denser article: Ginsburg, Simona & Jablonka, Eva (2021) 'Evolutionary Transitions in Learning and Cognition', Philosophical Transactions of the Royal Society B: Biological Sciences 376, 20190766

There are a number of broad cognitive functions that can explain behaviour, regardless of the actual architecture of the system (i.e. cellular signalling, simple nervous systems, or more complex nervous systems like those that require a brain).

For example, a nervous system that is simply a network of nerves (e.g. that of a jellyfish) allows system-wide coordination of perception and action, but cannot make centralised decisions. A network that is centralised (i.e. it all feeds into some central point, like animals with a central nervous system and culminating in a brain) allows for unified decision-making that can integrate information from across the whole network.

The implications, beyond those for e.g. AI, are that it is not so much the brain that matters as the ability of something to connect perceptions to actions and intervene on those relationships in various useful ways.