If you are curious, feel free to check out some of my online profiles. But I cordially invite you to contact me directly (via one of my online profiles, or by clicking this site's "Contact" button). I am looking forward to your message...
Sociologist. How we think digital algorithms. And how digital algorithms think us.
2016 - Present
Visiting Scholar / Cornell University
International visitor at the Department of STS sponsored by the Swiss National Science Foundation (Doc.Mobility fellowship for graduate researchers)
Lecturer / CEPV
Co-direction of the lecture course "Nouvelles Pratiques de l'Image" for the advanced professional training in photography (interim position)
Topics: - the circulation of images: memes, GIFs and kittens - data and metadata - sociological approaches to image analysis - my image online - a digital strategy as transmedia storytelling
Research Assistant / EPFL (École polytechnique fédérale de Lausanne)
Research Coordinator / Université de Lausanne
More information: http://cdh.epfl.ch/page-102343.html
Research Intern / DHLAB (École polytechnique fédérale de Lausanne)
Academic internship at the newly founded Digital Humanities Lab (DHLAB)
Research topic: Algorithmic texts and linguistic capitalism.
Scientific tasks (in collaboration with and/or supervision from Prof. Frederic Kaplan): - conducting exploratory research - presenting preliminary findings - assisting in the preparation of funding proposals (ERC and Swiss National Science Foundation)
Administrative tasks: - managing the design, content and set-up of the lab's first website
Sociologist / Sociostrategy
Interventions and training for organizations seeking to become aware of digital challenges and to understand their stakes and potential.
Personalized consulting for individuals and small organizations on interacting with modern means of communication (digital strategy and social networks, simple custom-built websites, etc.).
Université de Lausanne
(Docteur ès sciences sociales)
Sociology and Anthropology
Activities: Member of the Laboratoire de cultures et humanités digitales (LADHUL) and manager of the Twitter account @ladhul. Member of the Orchestre des Utilisateurs des Réseaux Sociaux (OURS).
Université de Fribourg/Universität Freiburg
lic.rer.soc (equivalent to an M.A.)
Sociology, Business Informatics, Economics
Activities: Committee member of the student association "Sozialwissenschaften" for three years.
Co-president of the student association "Sozialwissenschaften" for three years.
Member of the Faculty Council for three years.
«Did Google Manipulate Search for [presidential candidate]?» was the title of a video that showed up in my Facebook feed. In it, the host argued that upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions do not show up although – according to the host – they should.
I will address the problems with this claim at a later point, but let’s start by noting that the argument was quickly picked up (and sometimes transformed) by blogs and news outlets alike, inspiring titles such as «Google searches for [candidate] yield favorable autocomplete results, report shows», «Did [candidate]’s campaign boost her image with a Google bomb?», «Google is manipulating search results in favor of [candidate]», and «Google Accused of Rigging Search Results to Favor [candidate]». (Perhaps the most accurate title of the first wave of reporting comes from the Washington Times: «Google accused of manipulating searches, burying negative stories about [candidate]».)
I could not help but notice the shift of focus from Google Autocomplete to Google Search results in some of the reporting, and there is of course a link between the two. But it is important to keep in mind that manipulating autocomplete suggestions is not the same as manipulating search results, and careless sweeping statements are no help if we want to understand what is going on, and what is at stake – which is what I had set out to do for the first time almost four years ago.
Indeed, Google Autocomplete is not a new topic. For me, it started in 2012, when my transition from entrepreneurship/consulting into academia was smoothed by a temporary appointment at the extremely dynamic, innovative DHLab. My supervising professor was a very rigorous mentor while giving me great freedom to explore the topics I cared about. Between his expertise in artificial intelligence and digital humanities and my background in sociology, political economy and information management, we identified a shared interest in researching Google Autocomplete algorithms. I presented the results of our preliminary study at DH2013, the annual Digital Humanities conference, in Lincoln, NE. We argued that autocompletions can be considered “linguistic prosthesis” because they mediate between our thoughts and how we express these thoughts in written language. Furthermore, we underlined how mediation by autocompletion algorithms acts in a particularly powerful way because it intervenes before we have finished formulating our thoughts in writing and may therefore have the potential to influence actual search queries. A great paper by Baker & Potts, published in 2013, came to the same conclusion, questioning “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes“.
Back to the video and its claim that, upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions are not showing up although they should. But why should they show up? The explanation offered in the video is based on two arguments: graphs of comparative search volume based on the Google Trends tool and comparison with autocomplete suggestions from the web search engines Yahoo and Bing.
However, Google Trends seems to have the same flaw as statistics: it can be very informative, but if you torture it long enough, it will confess to anything. Rhea Drysdale has published an informative piece that shows very clearly the manipulative nature of (mis-)using Google Trends as «anecdotal evidence» for «two random queries out of literally millions of variations», the way the authors of the video have. I cannot but encourage you to read Drysdale’s article. (One sentence resonates particularly with me because of what I am currently working on: «Let’s see if mainstream media bothers to do their homework or simply picks up this completely bogus story spreading it further.» Previous experience suggests the latter.) She uses other queries and Google Trends to illustrate how a manipulation of search for another candidate could just as easily be “proved”, and concludes that there is no manipulation with a political agenda, just Google’s algorithms at work.
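Drysdale’s point about «two random queries out of literally millions of variations» can be made concrete with a toy simulation. The sketch below uses entirely made-up data (nothing to do with Google’s actual Trends numbers): given enough query variants, you can always select a pair whose curves “prove” whatever you set out to show.

```python
import random

random.seed(42)

# Hypothetical weekly "search volume" series for many query variants.
def fake_trend(weeks=52):
    return [random.randint(0, 100) for _ in range(weeks)]

variants = {f"query variant {i}": fake_trend() for i in range(1000)}

# "Prove" manipulation: pick the variant with the highest average volume
# and the one with the lowest, then present only those two as evidence.
avg = {name: sum(series) / len(series) for name, series in variants.items()}
loudest = max(avg, key=avg.get)
quietest = min(avg, key=avg.get)

print(f"{loudest}: average volume {avg[loudest]:.1f}")
print(f"{quietest}: average volume {avg[quietest]:.1f}")
# Out of 1000 random variants, the extremes differ substantially --
# "anecdotal evidence" manufactured by selection, not by manipulation.
```

The data here is pure noise, yet cherry-picking the extremes yields a dramatic-looking gap. That is exactly why two hand-picked Trends screenshots prove nothing.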
Another article by Clayburn Griffin comes to the same conclusion. He reminds us that «Google Autocomplete is more complicated than you think. It’s not as simple as search volume, though that is an important factor.» But Griffin is convinced: «What I’ve seen, and what’s been reported in the video claiming Google is biased, doesn’t look like manipulation to me. It looks like Google Autocomplete working as intended.»
This is where it gets tricky, because I am as glad as the next person to learn more about how Google Autocomplete is intended to work. Then again, for the point I am trying to make there is no need to go into the anecdotal, no need to know which query for which demographic on which particular search engine in fact does or does not prompt a particular autocomplete suggestion. There are also methodological issues, because not only are the algorithmic suggestions based on profiling and personalization, but the algorithms themselves are ever changing. More often than not, the focus on single trees has contributed to rendering the forest invisible. (Still, I must underline that within some great research the trees actually help illustrate the forest, or even the macrocosm – and in this regard: how very fitting that just before starting to write this article I saw raving tweets about an ongoing presentation by Safiya Noble. Please check out her excellent work.)
… no manipulation with a political agenda, just Google’s algorithms at work… But no political agenda does not mean not political. “Google Autocomplete working as intended” is necessarily political, for the very simple reason that algorithmic systems are not neutral.
To be very clear, that power is not necessarily one of manipulating people into having one opinion rather than another, but rather a power of agenda setting. In this regard, it is similar to traditional media, which cannot necessarily dictate what people think but can certainly impact what people think about. The information we are given in the form of Google results may affect our opinions on certain topics, but it is Google Autocomplete that may actually influence which topics we seek information about in the first place: the emergence of an autocomplete suggestion during the search process might make people decide to search for this suggestion although they had not originally intended to.
It is impossible to address power and agenda setting in a digital context without drawing parallels to the controversy around Facebook Trends. Until recently, little was known about the logics that make a topic “trending” on Facebook. And although a few researchers have been addressing the power of “trending” topics and the lack of knowledge about it, it has not necessarily been considered an issue by journalists, politicians or the general public. But when some of these logics were suddenly revealed, discussions about their adequacy, neutrality and transparency were sparked (and even a Senate committee got involved). Tarleton Gillespie addresses important issues with regard to the Facebook Trends controversy, many of which are just as relevant for Google Autocomplete.
It is not surprising that the dynamics around Google Autocomplete have followed a rather typical pattern: almost no interest whatsoever in how it works, even though next to nothing is known about it; suddenly, by learning something about Autocomplete, people learn that there actually is something to know; then they want to know more; finally, they demand accountability. That “something to know” may have been ignited by the video claiming political manipulation – or rather: re-ignited. Already in 2013, an ad campaign by UN Women made people more aware of the sexism in our world, as mirrored by Google’s autocomplete function.
Of course, knowing more about how Google Autocomplete works is a good starting point, which prevents us from confusing the (potential) manipulation of search queries with the manipulation of search results. As I wrote in 2013, it is interesting to learn that «autocompletion isn’t entirely automated. Google influences (“censors”, some say) autocompletion globally and locally through hardcoding, be it for commercial, legal or puritan reasons. (Bing does so, too.)» But ultimately, I am not convinced that yet another trial-and-error reverse engineering attempt that reveals whether a particular expression is or is not suggested in a certain context will contribute to a greater overall understanding.
By the way, this is the main reason why a comparison of results/autocomplete suggestions/… between different web search engines has its limits: it will only offer comparative insights (which, admittedly, might reveal some of what could be otherwise) and mainly keeps feeding into the erroneous idea that there is a single, self-explanatory standard of how technology should work.
As long as we hold on to the idea that a fair, neutral search engine (then again: fair and neutral for whom?) is possible and simply defined by the absence of manipulation, we have understood neither algorithmic systems nor society nor their intersection.
Oxford University Press: Social Media Guidelines
Filed under ‘Marketing Resources for Authors’, this website provides a great overview of potential uses (including helpful tips) of different SM platforms. (Simultaneously, it serves as cross-promotion for OUP’s channels.)
Today, I could write something very similar regarding the headlines informing us that, according to a recent study, Google’s advertising algorithms discriminate against women. And it is probably a handy opportunity to let you know that my PhD research in social sciences – still ongoing – is precisely about interaction with Google’s advertising algorithms…
However, this blog post is not going to be about my research. But when I saw the headlines about “discriminating advertising algorithms” I simply couldn’t *not* blog about it.
Luckily, WIRED has already taken care of asking the very same question I asked in my 2013 blog post about Google’s autocompletion algorithms: who or what is to blame? In a short but discerning piece WIRED explains the complex configuration of Google AdSense:
Who—or What’s—to Blame?
While the study’s findings would suggest Google is enabling discrimination, the situation is much more complicated.
Currently, Google allows advertisers to target their ads based on gender. That means it’s possible for an advertiser promoting high-paying job listings to directly target men. However, Google’s algorithm may have also determined that men are more relevant for the position and made the decision on its own. And then there’s the possibility that user behavior taught Google to serve ads in this manner. It’s impossible to know if one party here is to blame or if it’s a combination of account targeting from all sources at play.
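The three candidate causes WIRED lists can be sketched as a deliberately simplified model (all names and numbers below are my own invention, not Google’s API or actual scoring): the serving decision multiplies together a factor set by the advertiser, a factor learned by the platform, and a factor fed back from user behavior, so no single party’s contribution “contains” the skewed outcome on its own.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    advertiser_targets_men: bool   # explicit choice by the advertiser
    learned_male_affinity: float   # relevance learned by the platform (0..1)
    observed_male_ctr: float       # click-through rate fed back by users (0..1)

def serving_score(ad: Ad, user_is_male: bool) -> float:
    """Toy scoring: each party contributes one multiplicative factor."""
    score = 1.0
    if ad.advertiser_targets_men and not user_is_male:
        score *= 0.1  # advertiser's demographic targeting
    # platform-learned relevance
    score *= ad.learned_male_affinity if user_is_male else (1 - ad.learned_male_affinity)
    # behavioral feedback loop
    score *= ad.observed_male_ctr if user_is_male else (1 - ad.observed_male_ctr)
    return score

coaching_ad = Ad("executive coaching", True, 0.7, 0.6)
print(serving_score(coaching_ad, user_is_male=True))
print(serving_score(coaching_ad, user_is_male=False))
```

Note that even with `advertiser_targets_men=False`, the learned affinity and the click-through feedback alone still skew delivery toward men: the discriminatory effect survives the removal of any single intentional factor, which is precisely why accountability is so hard to locate.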
This configuration has allowed powerful companies to present their services as ‘platforms’: phenomenal and simultaneously neutral vessels of communication, filled only by the actions of numerous individual users. The complexity of the algorithmic systems at hand – because it is never simply Google’s algorithm (singular), lest we forget – contributes to making accountability impossible to locate as long as we keep looking for intentionality.
However, the authors of the “discriminating advertising algorithms” research argue that the effects they have uncovered, whether intended or not, are a matter of concern in any case:
… we are comfortable describing the results as “discrimination”. From a strictly scientific view point, we have shown discrimination in the non-normative sense of the word. Personally, we also believe the results show discrimination in the normative sense of the word. Male candidates getting more encouragement to seek coaching services for high-paying jobs could further the current gender pay gap. Thus, we do not see the found discrimination in our vision of a just society even if we are incapable of blaming any particular parties for this outcome.
Furthermore, we know of no justification for such customization of the ads in question. Indeed, our concern about this outcome does not depend upon how the ads were selected. Even if this decision was made solely for economic reasons, it would continue to be discrimination. In particular, we would remain concerned if the cause of the discrimination was an algorithm ran by Google and/or the advertiser automatically determining that males are more likely than females to click on the ads in question. The amoral status of an algorithm does not negate its effects on society.
By the way, the idea of starting with a focus on the “effects on society” and working backward has also been suggested in a recent Atlantic article about Google’s search results (just ignore the arguable opposition of “expert” vs. “neutral” if you can). The article was brought to my attention by Philippe Wampfler, who explicitly suggests Google should take responsibility for the company’s decisions by showing face instead of hiding behind the ‘platform’ discourse.
And before everyone turns – deservedly – to the Great Glitch of July 8, let me share three more links from my online advertising bookmark folder:
It goes without saying that the three articles are recommended reading. They are all related to online advertising and approach the topic from very different angles.
Then again: several issues of the current ‘advertising algorithm debate’ resemble what has already been discussed, e.g. in the context of other Google algorithms (poke: my 2013 piece on autocompletion and the links within).
And one day I might write more specifically about Google and big data and demographics and targeting and profiling…
Recently, I gave a lecture about the digital transformation for the Franco-Swiss CAS/EMBA program in e-tourism. Since the tourism industry is not my specialty, and the “social media” aspects had been thoroughly covered by colleagues, I had been specifically asked to convey a big-picture view.
I chose to address some overall issues related to ICT (information & communication technology), innovation and society by debunking the following five myths:
Ignoring the digital transformation is possible
Technological progress is linear
Connectivity is a given
Virtual vs. “real” life
Big Data – the answer to all our questions
Each of these points would deserve a treatise of its own, and I will not be able to go into much detail within the scope of this article. I nevertheless wanted to share some of the links and references mentioned during my lecture related to these issues. If you prefer reading the whole thing in French, please go to Enjeux technologiques et sociaux: cinq idées reçues à propos du numérique, which is the corresponding (but not literally translated) article in French.
Myth no. 1: Ignoring the digital transformation is possible
While discussions of online social networks have become mainstream, the digital transformation goes way beyond social media. It is about more than visible communication. It is about automation, computation, and algorithms. And as I have written before: algorithms are more than a technological issue because they involve not only automated data analysis, but also decision-making. As early as 1961, C.P. Snow said:
«If I’d asked people what they wanted, they would have asked for faster horses.»
Myth no. 2: Technological progress is linear
It is important to keep in mind that a technological innovation cannot be dissociated from a specific social context. As a reminder, I presented the chatbot ELIZA, the reception of its DOCTOR script, and how our notion of intelligence – human as well as “artificial” – has evolved over time. In hindsight, I realize I could (and maybe should) have spoken about the expression “glasshole”, its stakes and emergence…
Myth no. 3: Connectivity is a given
Internet Cable Map: http://www.submarinecablemap.com/
Myth no. 4: Virtual vs. “real” life
In August 2013, a viral video called I Forgot My Phone showed the terrible day of a woman without her phone. Why terrible? She felt excluded by the people surrounding her, who would not stop using theirs. Another, more recent viral video called Look Up repeated the same subliminal message, even more explicitly this time: there is the technology-mediated “virtuality”, isolating and/or excluding individuals, and then there is the physical, social reality where our actual ties lie. The sociologist Nathan Jurgenson has not only authored the ultimate take-down of these viral videos, he also coined “digital dualism”, the expression describing the mistaken attitude of considering the virtual and the physical two distinct realities. In his essay about the IRL fetish (IRL being the acronym of “In Real Life”) he insists:
It is wrong to say “IRL” to mean offline: Facebook is real life.
The structure of this article – “debunking myths” – is inspired by Antonio Casilli. Its content stems from my interdisciplinary background in sociology, economics and information management. No doubt, the combination of these disciplines has proven very useful for understanding current issues. It seems almost weird now how often, not so long ago, I had to justify my choice of an “unusual mix of disciplines having no common ground whatsoever”… Oh, and last but not least, because I wouldn’t want to be accused of lacking confidence: I am always happy to share my hybrid knowledge, so please do not hesitate to get in touch.
This article sketches my contribution to an EMBA / CAS training module a few days ago. The goal was to make the participants aware of information technologies as sources of major innovations, and to draw their attention to some of the social stakes of ICT. To make such a broad overview at least somewhat digestible, I decided to present it in five chapters, each debunking a preconceived idea about the digital:
It is possible to ignore the digital
Technological progress is linear
Connectivity is a given
There is the virtual, and there is “real life”
“Big data”: the solution to everything
Below is the presentation, followed by a few explanatory sentences with links/references.
Preconceived idea no. 1: It is possible to ignore the digital
The digital domain is often considered solely from a communication/marketing perspective, sometimes reduced to the topics of websites and online social networks alone. And while a company may, for instance, do without a Facebook page in full coherence with its strategy, the same does not hold for digital dynamics and evolution in the broader sense. This is because the digital revolution concerns far more than “social media”. It encompasses all kinds of algorithmic automation. A telling quote on this subject was uttered by C.P. Snow as early as 1961, and I had taken it up in a previous post (in English) two and a half years ago:
«If I had asked people what they wanted, they would have answered: faster horses.»
Preconceived idea no. 2: Technological progress is linear
And let us recall that a technological innovation is inseparable from a specific social context. I spoke about the chatbot ELIZA, the reception of its DOCTOR script, and the evolution of our conception of intelligence – human and “artificial” – and I could also (or rather: should!) have spoken about the expression “glasshole”, its genesis and its stakes…
Preconceived idea no. 3: Connectivity is a given
Internet Cable Map: http://www.submarinecablemap.com/
Preconceived idea no. 4: There is the virtual, and there is “real life”
In August of last year, a viral video called I Forgot My Phone showed the sad day of a woman without her mobile phone, feeling alone because the people around her (her partner, her friends, strangers in the street, etc.) never stop using theirs. The underlying message? Guilt-tripping based on the split between “real” sharing, without a smartphone, and isolation behind a screen. The video Look Up, more recent than I Forgot My Phone, conveys this technophobic message even more explicitly: on one side the isolating virtuality, on the other the social reality… The sociologist Nathan Jurgenson has called this opposition “digital dualism”. And in his manifesto against this fetish of real life (partly taken up and translated into French here) he insists:
It is wrong to say that IRL (editor’s note: “in real life”) means offline: Facebook is the real world!
The chapter structure of “preconceived ideas” was inspired by Antonio Casilli. The content of these chapters results from a global approach to these phenomena, rooted in my interdisciplinary background in sociology, political economy and (business) informatics. Very useful today, this choice of fields forced me, at the beginning of my studies, to justify myself constantly, since these fields supposedly “have nothing to do with one another”. And yet that was not so long ago… Oh, and without transition, so as not to be reproached with a lack of confidence: I am open to any other invitation to share around these topics.
Women need to be put in their place. Women cannot be trusted. Women shouldn’t have rights. Women should be in the kitchen. …
You might have come across the latest UN Women awareness campaign. Originally in print, it has been spreading online for almost two days. It shows four women, each “silenced” with a screenshot from a particular Google search and its respective suggested autocompletions.
Researching interaction with Google’s algorithms for my PhD, I cannot help but add my two cents and further reading suggestions in the links…
Women should have the right to make their own decisions
Guess what the most common reaction was?
People headed over to Google to check the “veracity” of the screenshots and to test the suggested autocompletions for “Women should …” and other expressions. I have seen this done all around me, on sociology blogs as well as by people I know.
In terms of an awareness campaign, this is a great success.
And more awareness is a good thing. As the video “autofill: a gender study” concludes, “The first step to solving a problem is recognizing there is one.” However, people’s reactions have reminded me, once again, how little the autocompletion function had been problematized, in general, before the UN Women campaign. Which, in turn, makes me realize how much of the knowledge related to web search engine research I have acquired these last months I already take for granted… but I digress.
This awareness campaign has been very successful in making people more aware of the sexism in our world, as mirrored by Google’s autocomplete function.
Women need to be seen as equal
Google’s autocompletion algorithms
At DH2013, the annual Digital Humanities conference, I presented a paper I co-authored with Frederic Kaplan about ongoing DHLab research on Google autocompletion algorithms. In this paper, we explained why autocompletions are “linguistic prosthesis”: they mediate between our thoughts and how we express these thoughts in (written) language. So do related searches, or the suggestion “Did you mean … ?” But of all the mediations by algorithms, the mediation by autocompletion algorithms acts in a particularly powerful way because it doesn’t correct us afterwards. It intervenes before we have completed formulating our thoughts in writing. Before we hit ENTER.
Thus, the appearance of an autocompletion suggestion during the search process might make people decide to search for this suggestion although they didn’t have the intention to. A recent paper by Baker and Potts (2013) consequently questions “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes“:
It is not possible to know how many people have typed in stereotyping questions about various social groups, and we do not know if such people represent the majority in a population. As noted above, we would guess that actual numbers of people asking such questions are relatively low, but those who do, tend to ask the stereotyping ones. However, even if it emerges that many people are interested in such questions and click on the auto-suggestions that appear, is there an over-riding moral imperative to remove these auto-suggestions?
Google’s autocompletion has been around for quite some time (almost 9 years to be exact, although the official roll-out only came in 2008). According to the company, the function suggests what it deems “useful queries” (without defining “useful”, bien sûr) to users “by analyzing a variety of characteristics of your custom search engine”. The volume of searches for a specific search term (from different locations) seems to be the main determinant. But there is no reason to believe that suggestions aren’t, to some degree, personalized (the way search results are). And: autocompletion isn’t entirely automated. Google influences (“censors”, some say) autocompletion globally and locally through hardcoding, be it for commercial, legal or puritan reasons. (Bing does so, too.)
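The paragraph above names three ingredients: aggregate query volume, some degree of personalization, and hardcoded interventions. A minimal sketch of how these could combine (illustrative only; the query log, the blocklist, and the ranking are my own invention, and Google’s actual, non-public pipeline is far more complex):

```python
# Hypothetical query log: query -> aggregate search volume.
QUERY_LOG = {
    "women should vote": 900,
    "women should be in the kitchen": 1200,  # high volume != acceptable
    "women should have equal pay": 700,
    "weather tomorrow": 5000,
}

# Hardcoded interventions, analogous to the "hardcoding" mentioned above.
BLOCKLIST = {"women should be in the kitchen"}

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    """Suggest the highest-volume logged queries matching the prefix,
    minus anything on the hardcoded blocklist."""
    candidates = [
        (query, volume) for query, volume in QUERY_LOG.items()
        if query.startswith(prefix) and query not in BLOCKLIST
    ]
    candidates.sort(key=lambda item: -item[1])  # rank by volume, descending
    return [query for query, _ in candidates[:limit]]

print(autocomplete("women should"))
# ['women should vote', 'women should have equal pay']
```

Even in this toy version, the editorial choice is plain to see: whoever curates `BLOCKLIST` (and whoever generates the volume through their queries) shapes what gets suggested. There is no configuration of this function that is simply “neutral”.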
Women cannot accept the way things are
There is no “veracity” to be established because Google is not the objective mirror it claims to be. Google is a company that works however it chooses. Instead of veracity, let’s focus on accountability. Because Google, the very existence of Google, as well as the specific way it works, is having an impact on our lives.
Who is in charge when algorithms are in charge?
I am not implying the negative stereotyped search term suggestions about women are Google’s intent – I rather suspect a coordinated bunch of MRAs are to blame for the volume of said search terms – but that doesn’t mean Google is completely innocent. The question of accountability goes beyond a binary choice between intentionality and complete innocence.
Unsurprisingly, Google doesn’t take any responsibility. It puts the blame on its own algorithms… as if the algorithms were beyond the company’s control.
The company maintains that the search engine only shows what exists. It’s not its fault, argues Google, if someone doesn’t like the computed results. […]
Google increasingly influences how we perceive the world. […] Contrary to what the Google spokesman suggests, the displayed search terms are by no means solely based on objective calculations. And even if that were the case, just because the search engine means no harm, it doesn’t mean that it does no harm.
If we, as a society, do not want negative stereotypes (be they sexist, racist, ablist or otherwise discriminatory) to prevail in Google’s autocompletion, where can we locate accountability? With the people who first asked stereotyping questions? With the people who asked next? Or with the people who accepted Google’s suggestion to search for the stereotyping questions instead of searching what they originally intended? What about Google itself? …
Women shouldn’t suffer from discrimination anymore
[…] given the growing power that algorithms wield in society it’s vital to continue to develop, codify, and teach more formalized methods of algorithmic accountability.
Which I think would be a great thing because, at the very least, it will raise awareness. (I don’t agree that “algorithmic accountability” can be assigned a priori, though.) But when algorithms are not accountable, then who is? The people/organization/company creating them? The people/organization/company deploying them? Or the people/organization/company using them? This brings us back to the conclusion that the question of accountability goes beyond a binary choice between intentionality and complete innocence… which makes the whole thing an extremely complex issue.
Who is in charge when algorithms are in charge?
Oh, and of course algorithms are not simply “bad”. The proof: Google Autocomplete can also produce nice things, e.g. poetry.
It was not my first TEDx experience, and I enjoyed the scientific emphasis. Below, I will share my personal thoughts and highlights, but would like to underline that the whole program was at a very high level.
Science and marketing
One of my favourite catchphrases comes from the entertaining Marc Abrahams and goes more or less like this:
If you do research and you know what you are going to find, you’re not doing research – you’re doing marketing.
Unsurprisingly for a science-heavy program, several speakers shared their journey from not knowing to actual findings:
Maya Tolstoy, a marine geophysicist with an impressive track record, spoke about her noticing oddly recurrent signals in her data – which would then pave the way for the discovery of correlations between tides and seafloor seismicity. Theoretical physicist Gian Giudice showed the impact of the discovery of the Higgs boson on the calculation of the stability of the universe. (Bad news, by the way: with Giudice’s current premises, the calculations reveal a highly unstable universe; however, it seems we don’t have to worry since our sun will blow up anyway before anything happens to our universe.)
And cosmologist Hiranya Peiris, wonderfully starting off the TEDxCERN talks with a whodunnit about the beginning of the universe, reminded us that all research – even when not revealing a ground-breaking discovery – trumps never leaving the point of not knowing:
“Looking and not finding is not the same as not looking.” (H. Peiris)
Science and resources
The focus of TEDxCERN was, accordingly, not only on the outcome of research, but also on science itself, and on the very importance of enabling and undertaking research:
Computer scientist Ian Foster, who is to be credited with the analogy between research and journey (and, incidentally, with grid computing), explained very well how an “ocean liner” such as CERN may be best adapted for certain kinds of research journeys – but not for all of them. And scientists who are not aboard an ocean liner (but a sailboat, for example) need to get ahead, too…
“Today, a person can run a company from a coffee shop thanks to cloud computing… what about labs?” (I. Foster)
He presented several cloud platforms empowering small-scale labs and researchers, notably Globus Online, which allows scientists to focus on data content rather than on data storage, sharing and maintenance.
On the other hand, TED veteran Lee Cronin stressed the need for his field, chemistry, to advance not only in sailboats, but to engage and collaborate at ocean-liner scale in order to discover the origin of life.
Science and tomorrow’s scientists
Coincidentally or not, two of the most personal talks were dedicated to the situation of young academics – although each from a very different viewpoint: Becky Parker is a teacher of physics and astronomy at Simon Langton School, acting along the lines of the “radical” idea that interest in science can be sown and supported by engaging students in actual scientific projects. LUCID proves her right. Her innovative approach and personal enthusiasm have triggered many “I wish I had had a teacher like her” thoughts and tweets.
In a society where Becky Parkers are the exception rather than the rule, insatiable curiosity and personal experience may make up for a lack of intellectual stimulation in school: Brittany Wenger began studying neural networks (by herself!) when she was just 13 years old, learnt programming, and now provides Cloud4Cancer, a service that detects breast cancer less invasively than standard methods.
The special guest scheduled right after Brittany Wenger’s talk, Will.I.am, also advocated for young scientists: live via webcam, he explained why he is fascinated by science and why he encourages young people to learn about science and programming, underlining the importance of engaging every kid in education and science, no matter what neighbourhood they grow up in.
(Unfortunately, I spotted some of the white, grey-haired men in the audience frowning upon hearing a black musician (read: “non-scientist”) talk about science in his own words – kudos to the TEDxCERN curators for not sharing this elitist mindset.)
Science and collaboration
SESAME: transnational scientific cooperation
Apropos of science and elitism: astronomer Chris Lintott‘s talk was a perfect illustration of the benefits scientists can gain from treating laypeople as a complement rather than an opposition. His Zooniverse, which brings together citizen-science projects, makes for a great proof of concept of collaborative and/or crowdsourced science.
Out of affinity, I guess – I am a sociologist – the talks addressing objectivity/subjectivity in science and research were the ones I personally liked best:
John Searle explained that ignoring consciousness is science’s biggest fallacy, one which unfortunately helped uphold the false dichotomy between objective science and subjective consciousness. He argued for the objectivity in subjectivity (and vice versa!) and, incidentally, trashed behaviorism. Which makes me think: for subsequent editions of TEDxCERN, it would be a great addition to give more room to research about science.
Many of the examples mentioned in Londa Schiebinger‘s talk were a perfect illustration of how objectivity and subjectivity co-exist – and thus why science and innovation need to be inclusive of diversity in subjectivity in order to be as objective as possible.
“Gender bias in society creates gender bias in knowledge.” (L. Schiebinger)
(For instance: childless urban planners modelled people’s movements by categorising each trip as “work”, “shopping”, “leisure”, “visits” etc. This might work for them. However, for people with care obligations who often zig-zag around the city – bring one child to school, the other one to day-care, and pass by the dry-cleaners etc. … all this on their way to work – single, finite categories for each trip simply didn’t work. For more examples and resources cf. Schiebinger’s project Gendered Innovations at Stanford.)
Science and soprano (and other music)
Listening to Maria Ferrante sing about galaxies and C8H10N4O2 was pure delight and fit the overall program very well. So did the reprise of Reach for the stars, the first song transmitted interplanetarily, performed by the Collège International de Ferney-Voltaire Choir and the International School of Geneva Chorus. Yaron Herman and Bijan Chemirani played together at the very end of TEDxCERN. I remembered Yaron Herman from when he played at TEDxHelvetia at EPFL a few months ago, where he also shared his fascinating story. It was a pleasure listening to him again, especially in harmony with Bijan Chemirani.
Last but not least
I need to mention geneticist George Church‘s talk, though I am not embarrassed to admit that I was not able to follow everything he said. What I understood and recall: DNA bears immense potential; transdisciplinary research is the future.
Big thanks to CERN, the TEDxCERN team and everyone else involved for a well-curated, diverse yet coherent program. Thanks to the speakers for making me think, and laugh.
By the way: another account of the TEDxCERN day can be found on TEDxCERN volunteer Alex Brown’s blog.
Oh, and you might want to have a look at the TED Ed videos co-produced with CERN. My favorites: