9.11.2008

Large Hadron in da house

If it generates a black hole that eats the earth from the inside out, at least it has already generated a sweet rap.



Pretty awesome.

7.31.2008

the McGurk effect

On my other blog, I posted a video of a French street performer. I think it's a funny video in and of itself, but I also put English subtitles on it. You can be the judge of whether this makes it more or less funny than before.

I find the effect of the subtitles strangely robust. I don't speak French, but my wife does, and when she watches the clip she says that, ever since she saw the subtitles, she has to concentrate pretty hard to hear the original French. It is like a variation on the McGurk effect: if you watch someone's lips while they are saying "Ga" and you listen to a recording of someone saying "Ba" at the same time, you will hear something closer to "Da".

In this case, you aren't necessarily watching the mouth of the speaker, but you are reading words that prime you to hear certain sounds more than others. The way I like to think of it (which may be wrong, but I don't have time to study the latest theories right now): the subtitles start to fire up certain nodes in the language networks, and this activity acts as a selective filter for the incoming auditory stimulation.
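
If it helps to make that picture a little more concrete, here's a toy sketch of the idea -- my own illustration, not a model from the McGurk literature, and every number and label in it is made up. The subtitles act like a prior that re-weights ambiguous auditory evidence:

```python
import numpy as np

def perceived(candidates, auditory_likelihood, prime_boost):
    """Toy model: priming acts as a prior that re-weights ambiguous auditory evidence."""
    prior = np.array([prime_boost.get(w, 1.0) for w in candidates], dtype=float)
    prior /= prior.sum()
    likelihood = np.array([auditory_likelihood[w] for w in candidates], dtype=float)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return dict(zip(candidates, posterior))

# Two made-up "hearings" of the same ambiguous audio, with made-up likelihoods.
candidates = ["French phrase", "English subtitle phrase"]
auditory_likelihood = {"French phrase": 0.55, "English subtitle phrase": 0.45}

print(perceived(candidates, auditory_likelihood, prime_boost={}))                                   # no subtitles
print(perceived(candidates, auditory_likelihood, prime_boost={"English subtitle phrase": 3.0}))     # subtitled
```

With no prime, the audio carries the day; with the subtitle-induced prior, the same audio tips the other way. That's all I mean by "selective filter".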

Link to demonstration of the McGurk effect.

7.30.2008

Technology !

Apologies for lack of posting lately. I'm approaching the lockdown phase of my Master's thesis writing.
Chances are, if you've come here, you've already been to the much more popular Mind Hacks and noticed the piece on "neurasthenia" -- but just in case you haven't, I'm linking to it here. It's a fascinating review of some early neurological literature in which the authors expressed concern over the possible mind-numbing effects of some of the newer technologies of the time, specifically with regard to the quickening pace of life that these technologies afforded. This same sort of worry isn't foreign to our generation. The internet is the latest technology to induce nightmares of a future earth populated with flabby, inert cyborgs whose virtual reality has eclipsed the allure of reality itself. Something like that, I guess.

It reminds me of Heidegger's critique of technology. Some folks use Heidegger as an argument against modern technology (i.e. technological devices and their ill effects upon us). Critically, Heidegger's notion of technology is much broader than just the devices that we commonly call technological. When we say "technology" we usually mean "something that makes practical use of advances in our scientific understanding of the world." Heidegger, on the other hand, is referring to a way of living where things appear to us as resources. And if the thing we see resists being seen as a resource, we see it as an obstacle of sorts and set out to find ways to convert it into a resource.

Take file-sharing as an example. Certain advances have made it possible to own an artist's music without paying for it.* And we jumped at the opportunity, didn't we? How did we get here? We can explain it by the lack of moral urgency facilitated by the internet's gift/curse of anonymity. Really, there may be any number of psychological explanations for it. Choose your theory. However you cash it out, I think it is at least reflective of an underlying refusal to see artists as anything more than resources (at least while we're involved in stealing their music). Ripping a copy of my favorite band's latest song may feel like I'm giving them a compliment (I could be spending my precious time downloading someone else's music for free, right?), but if it is a compliment, it is a backhanded one.

The great danger that Heidegger sees in this is that while in the past cultures may have had different, evolving ways of seeing and understanding the things we encounter in the world, the technological framework is one in which things are gradually divested of meaning altogether. The difference is that in these other, non-technological ways of seeing the world, we got caught up in a hermeneutic circle with things. We developed certain practices toward things based on what they were (and what they were was revealed to us on the basis of fundamental things like our deities, the capabilities of our bodies, the layout of the land, etc.), and these very practices changed the meaning of the things, which again altered our practices toward them. On the other hand, the entire thrust of the technological way of seeing the world is to erase those aspects of things that force us to adopt certain practices (e.g. we erase the aspect of the musician that would make us want to trade something valuable for her artistic work). We need them to be "flexible-shifty", in the words of a great teacher. It's like the hermeneutic circle between us and things loses momentum and falls flat. We're drawn to see things in a way that puts them totally in our power -- as our resources -- and if you're thinking that this sounds like Nietzsche (i.e. will to power) speaking through Heidegger, I think you're right on point.

I'm not too worried about neurasthenia, but I admit that I sometimes do worry that things become more meaningless the better I get at making things mean what I want them to mean. If that last part doesn't sound like a tautology to you, good. If it does, better.

* To be fair, I'm a fence-sitter on this topic. I've been a working musician before, so I know how important it is to get material in the ears of people, even if it means giving it out for free (or happily letting them steal it).

7.03.2008

The case of DF (visual form agnosia)

ResearchBlogging.org Thanks to Mind Hacks, I found out that my supervisor, Mel Goodale, was featured on ABC Radio National's "All in the Mind" series. In the interview (which you can listen to by clicking here), he talks about a patient named DF, whose unique brain damage (i.e., selective bilateral lesions in the lateral occipital complex due to an episode of hypoxia) resulted in the disruption of her ability to consciously identify objects on the basis of their shape or orientation. In other words, her "vision for perception" was compromised. The fascinating thing is that her "vision for action" was spared. That is, she can't consciously "see" the shape of objects, but she can interact with objects on the basis of visual information about their shape.


* A view of DF's brain damage (taken from James et al., 2003).

A number of papers have explored the behavioral consequences of DF's pathology. In one of these papers, Goodale and Humphrey (1998) presented DF with a slot that could be rotated and set at various orientations. For the first task, they gave her a card and asked her to match the orientation of the slot by rotating the card in her hand. As illustrated in the figure below, DF was unable to match the orientation of the slot on the basis of her perception of it. For the second task, they simply asked DF to post the card into the slot. Her performance was virtually indistinguishable from that of a healthy control.


* Results are normalized to upright orientation to show deviation from a successful performance.
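
For what it's worth, here's a minimal sketch of how a deviation measure like this might be computed -- my own reconstruction for illustration, not the authors' analysis, and the trial numbers are made up. Because the slot looks the same when rotated by 180 degrees, errors can only range from 0 to 90 degrees:

```python
def orientation_error(card_deg, slot_deg):
    """Smallest angular difference between the card and the slot; the slot is
    symmetric under a 180-degree rotation, so errors live between 0 and 90."""
    diff = abs(card_deg - slot_deg) % 180.0
    return min(diff, 180.0 - diff)

# Hypothetical trials: (slot orientation, orientation the card was held or posted at).
trials = [(0, 5), (45, 130), (90, 92)]
print([orientation_error(card, slot) for slot, card in trials])   # [5.0, 85.0, 2.0]
```

A large error like the 85-degree one is the sort of thing you see in DF's perceptual matching; the small errors are what her posting (and a control's performance) looks like.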

In the same paper, Goodale and Humphrey report the results of another task, in which DF was asked to pick up flat, non-symmetrical objects. Healthy controls typically accomplish this task by choosing stable grasp points (i.e. opposing vectors on parts of the object with high curvature) for the thumb and index finger, with the object's center of mass lying roughly between the two points. Despite the fact that DF is unable to distinguish between these objects, she is perfect at picking them up in an appropriate way (see figure below). Compare her performance with that of patient RV, who suffers from optic ataxia (caused by bilateral lesions of the occipitoparietal region). People with optic ataxia have preserved vision for perception, but their vision for action is compromised in some way. Thus, while RV is able to distinguish between the objects on the basis of vision, she cannot use that information to guide her grasping movements in an appropriate way. It is important to note that RV doesn't simply suffer from a motor impairment. With her eyes closed, she can successfully reach out and touch locations on her body or pick up objects at remembered locations in her peri-personal space. Her impairment is one of online control of visually guided movements.


* The lines connect the two opposing grasp points used by DF, RV, and a control subject.
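
Here's a rough sketch of that grasp-point logic, just to make it concrete -- my own toy version, not anything from the paper. Given a made-up outline and a candidate thumb/finger pairing, it scores how close the center of mass falls to the line between the two grasp points:

```python
import numpy as np

def grasp_score(thumb, finger, centroid):
    """Toy stability score: distance from the object's center of mass to the
    segment connecting the two candidate grasp points (smaller = more stable)."""
    a, b, c = (np.asarray(p, dtype=float) for p in (thumb, finger, centroid))
    ab = b - a
    t = np.clip(np.dot(c - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(c - (a + t * ab)))

# A made-up flat, non-symmetrical outline (polygon vertices, in cm) and a rough
# proxy for its center of mass (the mean of the vertices).
outline = np.array([[0, 0], [4, 0], [5, 2], [2, 3], [0, 2]], dtype=float)
centroid = outline.mean(axis=0)

# Score a few candidate thumb/finger pairings at the outline's corners.
for i, j in [(0, 2), (1, 3), (2, 4)]:
    print(f"vertices {i}-{j}: {grasp_score(outline[i], outline[j], centroid):.2f}")
```

Picking the pairing with the lowest score is roughly what controls (and DF) do effortlessly, and what RV's dorsal damage prevents her from doing under visual guidance.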

The case of DF, when considered along with the case of RV, highlights a double dissociation between vision for perception and vision for action. While DF can accurately guide her hand to objects whose shapes aren't consciously available to her, RV cannot accurately guide her hand to objects on the basis of their shape, even though the shapes of these objects are consciously available to her. This dissociation can be demonstrated in healthy subjects by taking advantage of the fact that the ventral visual processing stream ("vision for perception") is fooled by certain optical illusions, while the dorsal visual processing stream ("vision for action") seems impervious to them. Right now, the best explanation for this difference is that the ventral stream uses allocentric coding (i.e. it deals with spatial relationships between objects in the visual field), while the dorsal stream uses egocentric coding (i.e. it deals with spatial relationships between the viewer's body and target objects in the visual field).
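
If the allocentric/egocentric jargon is opaque, here is a tiny made-up example of the same scene described both ways (all coordinates invented):

```python
import numpy as np

# Made-up world coordinates (in cm) for a viewer and two objects on a table.
viewer = np.array([0.0, 0.0])
mug    = np.array([30.0, 50.0])
book   = np.array([45.0, 55.0])

# Allocentric coding: relationships between objects, independent of the viewer.
print("mug -> book offset:", book - mug)

# Egocentric coding: each object relative to the viewer's body, which is the
# kind of description a reaching movement actually needs.
for name, obj in [("mug", mug), ("book", book)]:
    vec = obj - viewer
    print(name, "viewer-centred vector:", vec, "distance:", round(float(np.linalg.norm(vec)), 1))
```

The first description survives any change in where the viewer stands; the second has to be recomputed every time the body moves, which is part of why the dorsal stream is thought to work "online".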

I think it's safe to say that my first exposure to the story of DF marks the beginning of my fascination with the brain.

Works Cited

James, T.W., Culham, J., Humphrey, G.K., Milner, A.D., & Goodale, M.A. (2003). Ventral occipital lesions impair object recognition but not object-directed grasping: an fMRI study. Brain, 126(11), 2463-2475. DOI: 10.1093/brain/awg248

Goodale, M.A., & Humphrey, G.K. (1998). The objects of action and perception. Cognition, 67(1-2), 181-207. DOI: 10.1016/S0010-0277(98)00017-1

7.02.2008

Brain and Mind symposium

Over at Channel N+, they have been gradually posting videos from the Brain and Mind symposium at Columbia University. I finally went and checked out the entire program and was delighted to find so many interesting speakers and topics. Click here to see the lineup.

6.24.2008

The cognitive impenetrability of the visuomotor system (part 1)

ResearchBlogging.org One thing I think I'll find eternally interesting is the degree to which my brain is doing things that I think I'm doing. That sounds painfully confused, so I'll put it another way. Dennett gave us the personal/sub-personal distinction (thanks, Dennett!). I'm consistently fascinated by how much of what I thought was personal is actually sub-personal. I'm not totally sure why it fascinates me so much. I can't deny that the brain becomes more beautiful to me as I begin to "see" my brain in my behavior, and maybe that has something to do with it. There's also that great conundrum -- weakness of will. Why do we do things that, at some level, we don't want to do? Conversely, why do we fail to do things that we want to do? It would be nice to know.

Those are big questions that are perhaps only conceptually related to the question that I really want to ask: How much does explicit prior knowledge of a reaching task help us determine a strategy for that task? Not much at all, according to a few recent studies.

Here's the story. Joo-Hyun Song and Ken Nakayama (2007) gave participants two basic reaching tasks. The “easy” task consisted of pointing to a single target on a screen, and the “hard” task consisted of pointing to an odd-color target among distractor targets. These two tasks were presented in three different conditions: blocked, mixed (i.e. pseudo-randomized), and alternating. They measured reaction times for each reaching movement.




The homogenization effect

In a setup like this, you'd expect the reaction times for the blocked easy task to be fast, and those for the blocked difficult task to be relatively slower. And that's exactly what happens. But when you randomly mix the easy and hard tasks, you get something called the homogenization effect, which essentially is an attenuation of the differences in reaction times between the easy and hard tasks. People get slower on the easy tasks and faster on the harder tasks.

What's behind this homogenization effect? There are (at least) two approaches to answering this. The commonsense approach is to say that explicit prior knowledge of the upcoming trial type gives people the opportunity to optimize their strategies, and since it is missing in the mixed condition, their strategies are suboptimal (i.e. too slow for the easy and too fast for the hard task).
The other approach is to say that people build up an optimized strategy by a sort of short-term motor memory. This memory would be implicit, passively accumulated, and quick to dissipate. Thus, if a certain trial type was repeated several times, the visuomotor system would accumulate an optimized strategy for the next trial (regardless of what you thought was coming next), and that accumulation would be abolished if the trial types were mixed in a random order.
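
To see what that second account predicts, here's a toy simulation I put together -- the learning rule, base reaction times, and parameter values are all my own made-up choices, not the authors' model. A single implicit "readiness" setting carries over from trial to trial and drifts toward whatever the recent trials demanded:

```python
import random

def simulate(schedule, start=0.5, learn=0.4):
    """Toy trial-history model: an implicit readiness setting carries over from
    trial to trial (0 = tuned for the easy task, 1 = tuned for the hard task).
    RT on each trial reflects the setting inherited from recent history; the
    setting then drifts toward the demands of the trial just performed."""
    demand = {"easy": 0.0, "hard": 1.0}
    s = start
    rts = {"easy": [], "hard": []}
    for trial in schedule:
        rts[trial].append(350.0 + 200.0 * s)   # made-up numbers: base RT and slope, in ms
        s += learn * (demand[trial] - s)
    return {t: round(sum(v) / len(v)) for t, v in rts.items() if v}

random.seed(1)
conditions = {
    "blocked easy": ["easy"] * 50,
    "blocked hard": ["hard"] * 50,
    "mixed": random.sample(["easy"] * 50 + ["hard"] * 50, 100),
    "alternating": ["easy", "hard"] * 50,
}
for name, schedule in conditions.items():
    print(name, simulate(schedule))
```

Nothing in this toy model knows what the next trial will be, yet the blocked schedules keep easy and hard reaction times far apart while the mixed and alternating schedules pull them together (strict alternation even nudges past the middle). That's the pattern the cumulative-learning story needs.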

So which one is it? Explicit prior knowledge or cumulative learning? We can figure it out by pitting trial-type repetition against explicit prior knowledge. Song and Nakayama did this by including the third condition: alternating back and forth between easy and hard tasks. We know there will be a homogenization of reaction times for the mixed condition, but will the same thing happen for the alternating condition? Or will explicit knowledge of the upcoming trial type allow the participants to optimize their reaching strategy?

Here are the results.




If anything, the alternating condition looks even more homogenized than the mixed condition. This is a strong argument in favor of the cumulative learning hypothesis. They go on to show that the number of trial-type repetitions influences the gradual optimization of reaching strategies in a linear fashion until the optimal strategy is reached. They also show that the suboptimal strategies permeating the mixed and alternating conditions result in curved trajectories toward the odd-colored target (in the hard task). It is as if people (or their visuomotor systems) are guessing and moving before they have selected the appropriate target, and correcting mid-flight if they happen to be heading toward the wrong target. And all of this is happening while we have the distinct impression that our explicit knowledge is going to give us an advantage. It doesn't.
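
The "move first, correct mid-flight" idea is easy to caricature in code. Here's a toy sketch (mine, not the paper's model; all the numbers are arbitrary) in which a reach that starts toward the wrong target and gets re-aimed partway through ends up with a curved path, while a reach aimed correctly from the start stays straight:

```python
import numpy as np

def reach(start, first_guess, true_target, switch_step=8, steps=30, gain=0.25):
    """Simple iterative reach: each step moves a fraction of the way toward the
    current goal; the goal switches from the initial guess to the true target
    at switch_step, which is what bends the path."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for step in range(steps):
        goal = np.asarray(first_guess if step < switch_step else true_target, dtype=float)
        pos = pos + gain * (goal - pos)
        path.append(pos.copy())
    return np.array(path)

def max_deviation(path, start, target):
    """Largest perpendicular distance of the path from the straight start-to-target line."""
    a, b = np.asarray(start, dtype=float), np.asarray(target, dtype=float)
    d = (b - a) / np.linalg.norm(b - a)
    rel = path - a
    perp = rel - np.outer(rel @ d, d)
    return float(np.linalg.norm(perp, axis=1).max())

start, wrong, right = (0.0, 0.0), (-10.0, 30.0), (10.0, 30.0)
committed = reach(start, right, right)   # target selected before moving: straight path
guessing  = reach(start, wrong, right)   # moved on a guess, corrected mid-flight: curved path
print(round(max_deviation(committed, start, right), 2),
      round(max_deviation(guessing, start, right), 2))
```

The curvature is the behavioral fingerprint of a system that starts the movement before the decision is finished -- fast to launch, corrected on the fly.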

This study is a first step in uncovering the degree to which the visuomotor system is cognitively impenetrable. More to come soon.


References

Song, J.-H., & Nakayama, K. (2007). Automatic adjustment of visuomotor readiness. Journal of Vision, 7(5):2, 1-9

5.23.2008

switching to thesis mode

I'm in the beginning stages of writing my thesis. It will be on the topic of action selection (specifically, grip selection).
This means that if I write anything at all on this blog in the next few months, it will probably be a review of some paper that I'm going to cite in my thesis.

If you're interested in this kind of stuff, feel free to check up periodically. If not . . . well, consider yourself normal.

5.13.2008

neuroscience is the new buddhism

Today's New York Times had an article on what columnist David Brooks calls Neural Buddhism. The link to the article is here. Before I get into the article, however, I just want to give some context.

As an aspiring Christian and neuroscientist, I often find myself getting frustrated over what I see as a push for widening the divide between spiritual and scientific worldviews. It is hard to locate the epicenter of this push, but it really doesn't matter: now that the lines have been drawn, polarization and extremism are inevitable. And now we get to listen to endless arguments on behalf of a rift that, while almost wholly cultural, pretends to be a clear-cut matter of one's ontological commitments and the methods employed to arrive at those commitments. Of course, there are real differences between the scientific method and what could, with only a staggering amount of distortion, be called a spiritual method. And, yes, these real differences in method tend to lead one to different ideas about what exists.

These differences should only matter to those who are concerned about 1) the truth-value of claims made by the opposing sides, and/or 2) the ethical implications of those claims. Perhaps I'm deluded or self-deceived, but it has never been difficult for me to see that scientific and spiritual pursuits are clearly not playing by the same rules with regard to truth, and that for us to demand one universal set of rules for both games is, at best, premature; at worst, it betrays smallness and intolerance. In the words of the smartest person I know, "truth is independent in [its own] sphere." I'm open to hearing arguments that either science or spirituality should be held to the standards that the other has spent centuries developing specifically for its own ends and goals (which are not the same), but few people seem to agree that this is an argument that even needs to be made.

And as far as ethics go, I think it is telling that many of these arguments cannot help but devolve into nothing more than historical smear-campaigns, where the same people who were so morally bankrupt that they could self-justify their efforts to orchestrate and carry out the wholesale destruction of entire cities and ethnic groups were also supposedly pure vessels of the deep religious or scientific ideals permeating their culture. Jonah Lehrer, over at The Frontal Cortex, has a good discussion of some recent research that suggests something that most people can intuitively guess: our moral decisions are not the direct result of a rational comparison between how things are and how things should be. If neuroscience has anything to say about the inherent ethical soundness of one ideology over the other, it should perhaps be this: "What we cannot speak about we must pass over in silence." (nod to Ludwig)

In his article, The Neural Buddhists, Brooks takes stock of this culture war and predicts that a shift is coming. The real war isn't going to be fought over the existence of God, but over the notion that God is someone or something that tells us exactly how to live a good life. Philosophies like Buddhism, says Brooks, appeal to neuroscientists because they (the philosophies) emphasize self-transcendence over revealed guidelines and laws, and self-transcendence is something that neuroscientists just might be able to work with. And in today's scene, if neuroscience takes a side, the fat lady has officially sung, right?

It's important, I think, to point out that this shift would only be a shift in the popular discourse. I don't read Dawkins, Dennett, Harris, etc. as saying that spirituality and self-transcendence are necessarily wrong-headed. I've heard interviews where Dawkins essentially agrees that there is nothing wrong with believing in God if it helps you live a better life, as long as you recognize that God is a metaphor for something else that is entirely based in your biology (one such interview was with Bill O'Reilly). However, since most of the popular discourse still fails to appreciate this point, I can still see a shift of sorts happening in the future.

I would welcome this shift, but let's not fool ourselves. Even if it were a battle to be decided by neuroscience (and it isn't), it is silly to suggest, as Brooks seems to do, that the field could, as a whole, be unified in support of one side of this sort of issue. If such a battle comes, there will be a handful of superstar New York Times bestsellers, columnists and bloggers will get excited about it and consolidate only a few of the most sensationalistic points for public consumption, chats at the cafe might be a bit different, a few thousand people may be swayed to switch sides, and then we'll move on. Maybe the next big battle will come. Meanwhile, I believe people inherently want to be good, and that they want to continually improve. If they find a vehicle for that improvement in science, religion, or mystical spirituality, they'll use it, regardless of what Christopher Hitchens or Andrew Newberg has to say about it.


the selection of grip posture

Below the break you'll find the poster from my most recent conference. It summarizes the research I've been doing over the last few months.

Click on the image to enlarge it.



4.30.2008

If you're happy and you don't know it, clap your hands!

A recent paper on unconscious emotional processing (Ruys & Stapel, 2008) has been getting a good amount of attention, and I thought I'd weigh in on it. You can find the abstract here.

The researchers flashed scary (e.g. a growling dog), disgusting (e.g. a dirty, unflushed toilet), or neutral (e.g. a horse) images on the screen, and they did this at two different speeds: quick and super-quick. Both of these speeds were quick enough that participants couldn't consciously perceive the content of the pictures. The participants were simply asked to judge whether the picture was flashed on the left or the right side of the screen.
The participants were then given a barrage of behavioral and cognitive measures. Here's one of them: When given the choice to complete either a "strange food test" or a "scary movie test", the people who were given scary priming images were more likely to choose the "strange food test" over the "scary movie test". The situation was reversed with the people who were given the disgusting priming images. This was, of course, the expected result. After all, who wants to think about food when there are "unflushed toilet" neurons firing in the brain?

The one message I get from this is that networks in the ventral visual processing stream ("vision for perception"), which carry the bulk of the responsibility for processing object identification and the semantic content of visual information, may have one temporal threshold for explicit awareness and another, lower temporal threshold for implicit awareness (implicit awareness being a sort of non-thematic experience of responsiveness to specifics in the environment). The content may not be accessible to explicit awareness, but the content is still "in the brain", triggering cascades of activity that eventually result in content-based physiological, behavioral, and psychological changes.

(I wonder if the same thing would happen with spatial thresholds? If you introduce noise into the image and gradually reduce or elevate the levels of noise, would you reach a point at which people are still explicitly unaware of the content of the image, but are at some level responding to the content of the image? I might just pilot it and see . . . )
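
If I ever do pilot it, the stimulus side is easy enough. Here's a minimal sketch that blends a grayscale image with increasing amounts of Gaussian noise to get a graded series -- the file name is a placeholder and the noise recipe is just one of many possibilities:

```python
import numpy as np
from PIL import Image

def add_noise(img, level, rng):
    """Blend an image with Gaussian noise; level 0 = original, level 1 = pure noise."""
    arr = np.asarray(img.convert("L"), dtype=float)
    noise = rng.normal(loc=arr.mean(), scale=arr.std() + 1e-6, size=arr.shape)
    blended = (1.0 - level) * arr + level * noise
    return Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8))

rng = np.random.default_rng(0)
original = Image.open("growling_dog.png")   # placeholder file name
for level in (0.2, 0.4, 0.6, 0.8):
    add_noise(original, level, rng).save(f"growling_dog_noise_{int(level * 100)}.png")
```

Then it would just be a matter of finding the noise level at which people can no longer report the content, and checking whether the downstream behavioral effects survive at that level.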

The results are interesting, but it was the discussion that really caught my eye. The authors gave the following description of unconscious emotions:

Emotions might be viewed as unconscious when they are detected by indirect behavioral or physiological measures, without being accompanied by conscious emotional experience. However, what does it mean when only indirect measures suggest the presence of an emotion? We think that the range of emotional measures that are affected depends on an emotion's intensity. When emotions are full-blown, people become aware of their emotions by perceiving their own actions and bodily reactions. When emotions are weak, people fail to notice their weakly related actions and bodily reactions.

This represents a turning point in the paper. Up to this point, the authors had been defining conscious and unconscious emotions on the basis of whether or not the participant was consciously aware of the content of the stimulus that induced the emotion. Here, they switch gears and define unconscious emotions as those emotions that are not accompanied by "conscious emotional experience", but which can be detected by indirect behavioral or physiological measures like the ones they used.

I assume that all emotions, conscious or unconscious, could be detected by indirect behavioral or physiological measures. If this is the case, then the only distinction being made between conscious and unconscious emotions is that the unconscious ones aren't accompanied by, well, conscious emotional experience. Quite the tautology.

This isn't the worst of it, though. It seems that defining unconscious emotions in such a way that they can be detected by others but not by oneself will lead to undesirable situations. To clarify: the problem isn't that we have behaviors and physiological responses that others are in a better position to notice than we ourselves sometimes are; rather, the problem is that we are allowing 'emotions' to slide into this category. Would it be problematic, for example, if the experimenter told the subject, "You don't know it yet, but you are afraid," and the subject happened to disagree? Or can we guarantee that these unconscious emotions are so self-evident upon reflection that the subjects would never disagree? I don't think that guarantee can be made, and I find that troubling.

The issue, I think, is the ubiquitous practice, within cognitive science, of augmenting or replacing folk-psychological definitions with the neural processes that underlie the original referent(s) of the term. There is the experience of the emotion. Then there is the biological basis of that emotion. Two distinct things, and I get anxious when the boundary between them isn't respected. Even before we understood anything about the brain's role in emotion, there really was something like experiencing an emotion, and I'm guessing that this is what makes emotions interesting to cognitive scientists in the first place.

Ruys, K. I., & Stapel, D. A. (2008). The secret life of emotions. Psychological Science, 19(4), 385–391.

4.29.2008

a skilled introduction

I have given 4 or 5 talks during my first year of graduate school. I've learned that one of the hardest questions you can get after the talk is, basically, "What's the point?" Man, I hate that question. The point is that I'm doing it, so it must be cool.

This may not be a hard question for some people who happen to have a clearly articulated result in mind every time they begin a study. But for people like me -- people who have to find out what they're trying to do by looking at what they've been doing for the past five years -- it is a taxing question.

One of the benefits of (and, indeed, one of my motivations for having) this blog is that it gives me a good reason to engage in this kind of teleological retrospective. I think (and I hope) this blog will be as formative as it is informative. For now, I think it's safe to say that my interests have lately converged on the general topic of skills, broadly construed.

I became interested in skills while studying phenomenology with Mark Wrathall (a "California" phenomenologist who studied with Dreyfus and Davidson at Berkeley, spent some time teaching at BYU, and recently moved to the Phil department at UC Riverside). Mark introduced me to Mel Goodale, a neuroscientist (and a darn good one, too) who studies the visual and motor systems of the brain. I ended up in Mel's lab, and now I find myself doing a study that explores how the brain helps us choose between grip postures when we are faced with an ambiguous object (or with an object at an awkward orientation). I started looking at this because I was interested in tool-use; I wanted to explore the brain areas that are responsible for developing "tool" skills and their related grip postures. (The phenomenologist in me is curious about the role of these brain areas in the perception of the tool itself -- i.e., how much does the content of our perceptual experience of a pair of scissors depend upon skills that we have developed with scissors?).

Sometimes I have crazy ideas about skills, even to the point of constructing rough sketches of a theory of perception that is entirely based on skills. Luckily, I've developed the skill of knowing when to shut up.

So, what's the point? Skills. That's the point.

I want this blog to cover a wide range of topics in philosophy and neuroscience, but be warned -- it's possible that, for the time being, the majority of posts will be part of an attempt to convey my passion for the topic of skill.

4.28.2008

raison d'etre

I started my first blog, Maieutica, with the intention of making it into a platform for my latest philosophical/scientific interests. It has gradually and unwaveringly evolved into a semi-humorous diary blog. I get about 300 hits a day on that blog -- a fact that made me feel pretty funny until I installed some free statistical tracking software and discovered that roughly 95% of the visitors were coming to see a picture of a French Bulldog that I pulled off Google Images.

The time has come to resurrect my original intention. Welcome to the brain and the sky -- my philosophy/science blog. Hope you're not here for the bulldogs, 'cause there ain't any.

* A note on the title:
It was inspired by an Emily Dickinson poem.

"The Brain"

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

The brain is deeper than the sea,
For, hold them, blue to blue,
The one the other will absorb,
As sponges, buckets do.

The brain is just the weight of God,
For, lift them, pound for pound,
And they will differ, if they do,
As syllable from sound.
