Okinawa Computational Neuroscience Course 2016: Thoughts from a Student

OIST Campus, Okinawa, Japan.

By an isolated beach on an isolated island in the East China Sea lies a place where people gathered to study the most complicated biological organ known to humankind – indeed, the organ which has made humankind possible. Shoes were deemed optional by most, even for the morning lecture; appearances, formalities, and the regular trappings of the world seemed to have no relevance here. Here was a monastery of science, nestled in the sub-tropics. Our prayers were programs and our sermons were lectures. Most would have preferred that you combed your hair in the morning and bathed appropriately, but beyond that it was not the external you that people were interested in, but the internal you. And what a mixture of internal yous we had.

With 32 students, some half a dozen tutors, and a steady stream of lecturers from all around the world, there was hardly a single moment which wasn’t filled with some kind of fascinating conversation. While in one corner there could be a deep discussion on the merits of different techniques for exploring the parameter space of advanced simulations, in the other corner there could easily be a passionate discourse about the free-jazz evolution of Miles Davis. And all the while there would be several people tapping away at keyboards, working, and still others relaxing or eating sushi. It was marvellous.

Of course, none of this would have been possible if it weren’t for the Okinawa Institute of Science and Technology (OIST) Graduate University, our generous hosts. They provided us with three square meals a day, lodging, airfares for those who could not otherwise afford them, as well as transfers to and from the airport. That meant our yen could be spent in only one of three places: trinket shops at the tourist spots we visited on Sundays, the local Family Mart, or the beer vending machine in the lobby. Can you guess where most of most people’s yen went?

A few of the visiting lecturers were kind enough not just to share their devotion to knowledge with us, but also their enthusiasm for karaoke or beer. It seemed only natural: here we were, many hundreds or thousands of miles away from our regular environments, trapped on an island together with only learning and conversation and beer (assuming one can distinguish learning from conversation, and conversation from beer).

It was these three things, or really just the time we spent together, that made this a once-in-a-lifetime opportunity to make life-long friends. Many were made and many will be treasured.

Let scientists be scientists

Thomas Edison experimenting in his laboratory, c. 1920. Photograph from the US Library of Congress collection.

The job of the scientist is, first and foremost, to pursue science, not funding. Larry Marshall, the newly appointed head of CSIRO, may have started out in science, but you know he’s spent too many years in business when – in his first official communiqué – he claims that scientists have a ‘duty’ to start companies.

It would be too easy to simply list all of the lovely technologies and life-changing medical advancements basic research has given us, and ask, ‘Would you have liked to have gone without all of these?’

Instead, let’s acknowledge that industry can help fund some basic research, but that an overemphasis on the commercialisation of research outcomes is fundamentally at odds with the nature and philosophy of science as a human endeavour.

Imagine a young research student in 1950s Japan. The professor they are working for has assigned them the task of studying tiny, nocturnal sea creatures whose dried, crushed remains glow when moistened with water. It seems strange, perhaps even interesting, but could anyone at the time have reasonably predicted that this student’s work would eventually lead to a Nobel Prize? Osamu Shimomura, the student who did that work, could hardly have guessed so.

That early work led Shimomura to the glowing jellyfish Aequorea victoria, in which he helped discover aequorin and green fluorescent protein – the molecules that make the jellyfish glow. Isolating the genes for these proteins has led to their widespread use in research, giving medical researchers the ability to observe biological activity at the molecular level and allowing engineers to develop advanced biosensors, among other things.

If Shimomura had a duty to start a company, how could he have sold his jellyfish experiment to a venture capitalist?

S: I want to study these glowing jellyfish. They’re really cool.
VC: What will you discover? How much money can we make from your discovery?
S: I don’t know.

And Shimomura can’t know, nor can anyone! That’s the fundamental nature of basic research and of science generally: to discover what we don’t yet know. Sure, we can sometimes reasonably guess, but we’re only guessing, and in most instances of basic research – Shimomura’s included – we can barely do even that.

Shimomura wouldn’t have been able to get his experiments off the ground if he had been told he had a duty to make their outcomes commercialisable. He would have had to do something completely different, and ultimately something that wasn’t basic research.

Since this problem of the unknown is common to almost all basic research, to claim that scientists ought to start companies is essentially to say that they shouldn’t pursue basic research. Do we really wish for this? If we do, the ultimate goal and purpose of science is lost to an impatient sense of utility.

I say ‘impatient sense’ because basic research forms the foundation on which all applications of science rely. Those biosensors, and the incredible detail of information which medical researchers have used to develop new treatments for disease, are all – in part – courtesy of a young research student in 1950s Japan methodically studying some obscure, nocturnal sea creatures.

Is morality in contradiction with our evolution?

A mother gray langur (Semnopithecus entellus) holding her infant. Photographer: Nevil Zaveri.

Humans, Homo sapiens, are primates. Our genetic heritage can be traced back some 85 million years, to when the distinctive order of Primates arose from other mammals. It then took around 82.5 million years for the first members of our own genus, Homo, to evolve. The earliest of these species, Homo habilis, fashioned rudimentary stone tools and lived in small groups similar in size to those of modern chimpanzees. This small group size afforded two distinct advantages: protection from predators and greater efficiency in gathering food. In other words, our ancestors cooperated to survive (much like we do today). It took roughly another two million years, and many evolutionary steps, before the first anatomically modern humans appeared (between 400,000 and 250,000 years ago). Over this long stretch of evolutionary time, and even in the relatively short period since, humans have evolved to become the type of animal they are today – flaws and all.

The consequence of this is that, as our societies and technologies have progressed, various traits and features which once ensured our species’ development and survival are now less relied upon, or are altogether inappropriate. These include anatomical vestiges like the recurrent laryngeal nerve, which, instead of running directly from the upper parts of the vagus nerve in the head and neck to the larynx (our voice box), makes a massive detour down towards the heart and back up again. The same vestige exists in the giraffe, where the nerve travels from the top of the neck all the way down, around the heart, and back up again. This unnecessary, indirect route is a legacy of our fish ancestors, in which the nerve first evolved and followed a very direct path. Humans also possess behavioural vestiges, like the ‘goose-bumps’ reaction we get when we are cold (which helped keep our hairy ancestors warm) and when we are frightened (which helped make our hairy ancestors look bigger, meaner, and more threatening to potential attackers).

It’s probably not worth bothering to re-route our nerves more efficiently, or to remove now-innocuous in-built behaviours like ‘goose-bumps’, but would it be worthwhile (or even possible) to change the innate human intuitions which influence what we call morality? In the case of impartialist normative theories, I think we can consciously reason past these intuitions to some extent, and that doing so is worthwhile, but we are sometimes working against the grain of our biological hardwiring.

Our hardwiring does not always work against morality, though. Sun-tailed monkeys (Cercopithecus solatus), fellow primates, are known to make warning calls to their group when they spot nearby predators. However, calling also attracts the predator’s attention and generally increases the chance that the individual making the call will itself be captured as prey. Through an ethical lens, this looks like a heroic case of self-sacrifice for the good of the many. But how many are ‘the many’, exactly, and how are they related to the caller? Since the monkeys’ mean group size is 17 individuals, and these individuals both know and are closely related to one another, the act might not have the same gravitas as the archetypal, heroic self-sacrifice we might imagine of some humans – whether historical or mythical figures like William Wallace and Hercules, or more recent activists like Gandhi and Martin Luther King Jr.

This type of kin altruism exhibited by sun-tailed monkeys (and other species), whereby altruism is limited to a few known and especially closely related individuals, is also seen in humans. In a psychological experiment led by Jens Koed Madsen at University College London, participants held a painful skiing position for as long as they wished, and the longer they held the position, the greater the reward for a related family member. Participants held the painful position longest for their closest relatives, supporting the idea that human altruism is affected by relatedness to the benefiting individual.

While we may have evolved to be partial to those closest to us, and impartialist theories like utilitarianism and Kantianism go against this evolutionary bias, it is helpful that we have at least some intuitive altruism (albeit not necessarily ‘true’ altruism, since in natural contexts it is directed primarily or exclusively towards our kin – that is, kin altruism). Sound argumentation may nevertheless enable us to extend this intuitive altruism towards a larger number of less-related individuals, like the millions around the world suffering and dying from preventable causes.

However, reason might not always win over vestigial moral intuitions, at least at first. Jonathan Haidt’s psychological experiments on moral knee-jerk reactions – to scenarios such as a carefully constructed, arguably harmless instance of incest – demonstrated that our intuitions can persist even after they have been shown to be unreasonable, at least in our initial reactions. This could indicate that reason simply takes its time, or needs to be very convincing, to have an effect on our thinking.

Where are the big numbers for science education?

Another day, another article.

I wrote again for SBS, this time on science education in Australia:

It’s partly based on an interview I did on Friday with Prof Lawrence Krauss, which you can view in full here:

I’ll be writing at least one more piece based on this interview for The Australia Times and will post a link here when it’s up.