The Pittsburgh Conference, or PittCon as it’s affectionately known, is one of the biggest lab equipment trade fairs on the planet. There are hundreds of exhibitors dazzling audiences with their latest shiny new instruments.
Everything is that little bit better, faster, more reliable than the competition in some way or another, and as a self-confessed amateur when it comes to most of this kit, it can be hard to see through the spiel to find out what’s really groundbreaking. But a few little things have caught my eye on my wander around the exhibition hall.
Atomic Force Microscopy (AFM) conjures up images of large, expensive equipment confined to the basements of laboratories. But NanoMagnetics Instruments based near Oxford, UK, has made an AFM unit that will sit on the palm of your hand, and sells for the price of an optical microscope. The company is aiming it at users who might have thought AFM was inaccessible to them – either because of price or size of the equipment. The ezAFM can be packed up into a small suitcase and taken out into the field or from lab to lab to be used wherever it’s needed.
Taming GC gas guzzlers
Conserving helium is a major priority as it becomes a progressively more scarce resource. While gas chromatography (GC) probably isn’t one of the biggest consumers of the gas, the shortages can make it difficult to maintain supplies in labs. With that in mind, Thermo Fisher Scientific has developed a new injector module that cuts helium consumption dramatically.
In a normal GC, as the sample is injected into the instrument, a relatively large amount of gas is used to ‘split’ the sample and purge the injection port, while a small amount goes through the column to carry the sample through the machine. In Thermo’s new injector, the splitting and purging is done with nitrogen, and helium is only used as the carrier gas. That means that if, for example, a cylinder of helium would normally last for three months of continuous operation, with the modified injector it will last for three to four years.
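The scale of the saving is easy to check with back-of-envelope arithmetic from the figures quoted above (the cylinder size and flow rates are not specified, so these are illustrative round numbers only):

```python
# Illustrative arithmetic for the helium saving claimed above:
# a cylinder lasting 3 months of continuous operation stretched to
# 3-4 years implies the injector cuts total helium consumption by
# roughly a factor of 12-16. Put another way, the carrier flow
# through the column accounts for only ~6-8% of the helium a
# conventional split/splitless inlet consumes.
months_before = 3
months_after = (36, 48)  # 3 and 4 years, expressed in months

reduction = tuple(m / months_before for m in months_after)
carrier_fraction = tuple(round(1 / r, 3) for r in reduction)

print(reduction)         # (12.0, 16.0)
print(carrier_fraction)  # (0.083, 0.062)
```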
Holding the key
— Waters’ iKey
Microfluidics systems – from lab-on-a-chip reactors to diagnostic sensors – are increasingly popping up in all sorts of applications. Waters has developed a series of plug and play microfluidic liquid chromatography (LC) cartridges, called iKeys, that plug in to the source of a mass spectrometer (MS). The company claims that the microfluidic platform is more sensitive than other LC-MS systems, requiring smaller samples, but is also easy to use and significantly reduces solvent consumption.
This month’s episode is a climate science special, as the Royal Society and the US National Academy of Sciences release a joint publication outlining the evidence for and causes of climate change, offering an accessible overview of climate-change science. In this episode, find out more about this joint report from the lead UK scientist, discover to what extent – if any – extreme weather events experienced here in the UK can be attributed to climate change, and find out what we may be able to do to mitigate and prepare for further impacts of climate change.
0:35 Professor Eric Wolff FRS talks about the joint Royal Society and National Academy of Sciences publication
4:25 Youba Sokona discusses the United Nations Framework Convention on Climate Change
7:31 Professor Mark Pelling talks about the difficulties in what can and can’t be attributed to the effects of climate change
9:35 Professor Georgina Mace CBE FRS discusses the options we have to deal with climate change in the future
14:21 Professor Saiful Islam talks about his work in computational chemistry and how this can assist in our efforts to live a little ‘greener’.
What can chemists do to help create a ‘virtual human’? At the American Association for the Advancement of Science (AAAS) 2014 meeting in Chicago, a panel of researchers set out their demands for the chemistry community.
But what is a ‘virtual human’? Projects range from organ-on-a-chip microfluidic devices that might mimic a particular behaviour of a certain organ, through to detailed computer models that map the entire skeleton, or even simulate a human brain. Others take a broader approach, sampling thousands of biomarkers from thousands of healthy individuals to chart the variability and dynamism of human biochemistry.
It’s a subject that exists at the interface of chemistry, biology, physics and computer science, and has obvious medicinal potential in allowing us to develop new drugs in silico or helping us to treat existing patients.
Underpinning all of this work is chemistry, but each project places different demands on the chemistry community. ‘I think what is really important for the study that’s going to look at a whole series of different individuals is to develop techniques, measurement techniques or imaging techniques, that can explore completely new dimensions of patient data space,’ offered Leroy Hood, co-founder of the Institute for Systems Biology in Seattle, US. ‘So I think what is really going to be critical is … the development of highly miniaturised, highly effective devices for being able to make measurements. For example, I’d like to have a microfluidic chip that could measure 2500 proteins in the blood accurately. … There are new chemical approaches that I think within 5 years will enable us to do this and then convert these assays on to a microfluidic platform so you can take a fraction of a droplet of blood and in 5 minutes you can make 2500 measurements. It will let you assess wellness for near 50 major organs, so these are the kind of things we need.’
These tests would need to be fast, highly specific and work with minuscule samples, but there is also the demand for these techniques to be cheap: ‘we need to be able to make reagent technologies for genome sequencing, for example, at a much more affordable price’ adds Vijay Chandru, chairman of Strand Life Sciences in India, where he’s been overseeing the development of a virtual human liver. ‘That requires innovation from chemistry. You talk about a thousand dollar genome but it really is much more expensive to actually sequence a genome and I believe that chemistry can bring the cost down.’
Hood also argues that we need to improve our ability to see on the macro scale at molecular resolution in order to better understand our most complex organs: ‘the other thing that I think is really critical is imaging. I think in the end the only way we’re ever going to understand the brain is to be able to do molecular in vivo imaging in the context of whatever operations you’re interested in.’
Peter Coveney of University College London wants chemists to push out of their comfort zone: ‘What I’m interested in is chemists looking at life in a more interesting way. That means studying systems out of equilibrium. It still shocks me how much of chemistry is stuck in an old fashioned equilibrium style approach, and studying complicated non-equilibrium systems begins to address these network issues. And also the patient specificity and accuracy of the calculations that we’re alluding to is something that these people need to address.’
Finally, Christian Jacob, from the University of Calgary, Canada, pointed out that more data and more complicated models will require researcher teams with a wider range of skills. ‘We also need more computational chemists, because there’s actually a huge gap between people who gather the data and who is going to build the metadata, the meta models around the data. Eventually they have to be encouraged to actually be able to work with the data.’
So, can your work help create a virtual human?
Just when we all thought the tube couldn’t get any worse, frustrated commuters in London last week were treated to the news that, due to an engineering mishap, a signal control room for the Victoria line had become flooded with fast-setting concrete, forcing the line to temporarily halt.
— © UsVsTh3m/Twitter
Then things got even weirder… the news reported that when the sludgy mess was discovered, staff had rushed to nearby shops to buy bags of sugar to throw on it. This, they said, ‘stops the concrete from setting so quickly’ so it could be cleaned up before it damaged equipment. This intrigued us in the CW office – why sugar? It seems bizarre that something so simple and readily available could have this effect.
It turns out this trick is well known in the construction industry, and builders often use small amounts of sugar or sugary liquid as an instant retardant for concrete on hot days, to stop it setting too quickly and cracking.
Sugar disrupts the setting process by preventing the hydration reaction between water and cement – a key ingredient of the concrete mix containing calcium, silicon and aluminium oxides. Dry concrete mix contains cement together with a coarse aggregate – usually sand or crushed gravel. When water is added it reacts with the cement’s components to make a thick paste which hardens to bind the aggregate together.
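For reference, the dominant setting reaction is the hydration of alite (tricalcium silicate), the main reactive phase in Portland cement; written approximately (this equation is standard cement chemistry, not taken from the original post):

```latex
% Approximate hydration of alite; the calcium-silicate-hydrate (C-S-H)
% gel product is the 'thick paste' that binds the aggregate together.
2\,\mathrm{Ca_3SiO_5} + 6\,\mathrm{H_2O} \longrightarrow
  3\,\mathrm{CaO}\!\cdot\!2\,\mathrm{SiO_2}\!\cdot\!3\,\mathrm{H_2O}
  + 3\,\mathrm{Ca(OH)_2}
```

It is this reaction (and the analogous hydration of the aluminate phases) that sugar is thought to interrupt.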
Throwing sugar into the mix interferes with the hydration process, although the exact mechanism is still a bit of a mystery. One theory is that the sugar molecules coat cement particles and prevent them clumping together to form a smooth paste. Another suggests that the sugar reacts with aluminium and calcium in the cement to make insoluble complexes. These interfere with the hardening process and leave less Al and Ca available to react with the water. Some sugars work better than others (white refined sugar works well, while the milk sugar lactose only has a moderate retarding effect), and salt acts in a similar way. The cement hydration process itself is not fully understood, and there are likely to be several interactions at play.
The more sugar you mix in, the longer the concrete takes to set, and if the sugar concentration exceeds 1% of the cement mixture it will refuse to harden altogether. While this effect can be useful, it does have its downsides. Because sugar is not usually considered hazardous, dry concrete mix can easily become contaminated while it is being transported. The Australian company CSR, which used to produce both sugar and building materials, once had to recall a whole shipment of cement after it used one of its bulk sugar boats to transport aggregate.
As for the signal control room at Victoria, it seemed the sugar did its job long enough for the concrete to be scooped out – the trains were back to normal the following day.
This month, we give you an insight into the various types of public events we hold at the Royal Society throughout the year. These cover a wide range of topics, from space to microbiology. Find out more about the giant planets in one of our Café Scientifique discussions, discover whether we are losing or winning against the spread of infectious diseases through a public lecture and get a peek at the future of the internet with one of our award winners.
01:13 Dr Christopher Arridge talks about the giant planets of the solar system
06:17 Professor Christopher Dye FRS on the fight against infectious diseases
13:21 Dr Serge Abiteboul, winner of the Milner Award, on the web and computer science
17:37 Professor Chris Dye tells us ‘why science?’
For the last couple of years, I’ve been honoured with an invitation to join the judging panel on the Cambridge heats of FameLab, an international competition to ‘find the new voices of science and engineering across the world’. FameLab was set up by the Cheltenham Science Festival (in partnership with Nesta) back in 2005, and aimed to ‘find and nurture scientists and engineers with a flair for communicating with public audiences’. After developing a link with the British Council in 2007, FameLab has been intent on global domination, and with more than 23 countries competing in 2013, seems to be well on the way to reaching that goal.
To test the mettle of our aspiring science communicators, each challenger must prepare a three-minute presentation. Time is tight, and there’s usually a strong incentive to stop at the three-minute mark (in our case, the sound of an awful squeaky dog toy, drowning out your big punchline). As a judge, I’m asked to evaluate each presentation on ‘the three Cs’: content, clarity and charisma.
Everybody (well, almost everybody) who takes the stage is well prepared, knowledgeable and enthusiastic, so it can be hard to divide them. After all, who am I to tell a physicist that her ‘content’ on dark matter isn’t strong enough?
To try to balance out the personal understanding and subconscious biases, FameLab invites a number of judges from different fields, but even if that helps to smooth the content quibbles, the wildly subjective ‘charisma’ category can lead to heated arguments on the judging panel. ‘Clarity’ at least seems a fairly straightforward measure – how well did I understand you? But even that leaves questions – what’s an acceptable level of verbal shortcut before we start ‘dumbing down’?
Organised by the Cambridge Science Festival team, our local final is this week, and so far we’ve seen 20 engaging, thought provoking and entertaining performances from new and established researchers, covering topics including crystallography, materials science, stem cells, worm holes and ‘the planet that never was’. I’m really looking forward to the final, where the 10 best performers will have to face the judging panel again with brand new material, and we’ll pick a winner to send to the UK final later in the year.
Regardless of who will win this week, FameLab as a movement goes to show how excited, enthusiastic and skilled young researchers can be at telling the stories of their science.
If you think you can communicate science with flair, then keep an eye on the FameLab website for local opportunities. If you’re better behind the keyboard than on stage, why not apply for the Chemistry World science communication competition?
The Cambridge University science magazine, BlueSci, has created a playlist of the finalists so far:
And a public vote will put four of the remaining hopefuls through:
Last week, the Science Council released a list of their ‘100 leading practising scientists’. Their aim in publishing the list was to ‘highlight a collective blind spot in the approach of government, media and public to science, which either tends to reference dead people or to regard only academics and researchers as scientists.’
The Science Council is an umbrella organisation that brings together 41 learned societies and professional bodies, including the Institute of Physics, the Society of Biology and, of course, the Royal Society of Chemistry. To arrive at their list, member organisations were invited to nominate individuals ‘who are currently engaged with UK science that other scientists might look to for leadership in their sector or career’. They then convened a representative judging panel to knock it down to a round 100.
The Chemistry World team looked through the list and realised that it contained a number of familiar names (perhaps no surprise, as the Royal Society of Chemistry is one of the organisations called upon to nominate), so we thought we would highlight some of the Science Council’s top 100, explaining how and why they appeared in the pages of Chemistry World…
Bayley not only won the 2008 Chemistry World Entrepreneur of the Year award for his role in the spin-out company Oxford Nanopore; his work has also been highlighted in our news and feature articles over the years. Here he’s recognised for ‘ground breaking research into the structures and properties of biological molecules’, including engineered membrane proteins that may allow for affordable and reliable DNA sequencing. He also joined us on the Chemistry World podcast to talk about his work 3D printing tissue-like materials.
The Science Council recognise Cronin as someone whose star is in the ascendant, and he’s certainly not coy about his ambitions.
In a feature on the origins of life, we discussed Cronin’s work on self-assembly and self-organisation, but he’s also graced our pages for 3D printing bespoke labware and even miniaturised fluidic devices. His productive team at the University of Glasgow have also published on efficient electrolysis systems for the production of usable hydrogen gas.
As one of the fathers of graphene, Geim’s work has turned up in almost every edition of Chemistry World since he won the Nobel Prize in Physics jointly with Konstantin Novoselov in 2010. Although the news is not always good, new understanding and potential applications of graphene are regularly published in the scientific literature. From fluid filters and transparent loudspeakers to carbon capture devices and antimicrobials, we try to keep abreast of the range of applications.
The idea of working with biting insects may make your skin crawl, but John Pickett has been at Rothamsted Research for over 30 years, investigating what it is that makes some people so appealing to pests, while others seem to remain bite free. His work on volatile chemicals and their interactions with insects has regularly made our news pages.
Biotech entrepreneur and microbiologist turned biochemist Christopher Evans may not feature in the magazine for groundbreaking science, but he has a significant impact behind the scenes through investment and entrepreneurship. As well as heading up the Merlin investment partnership, Evans campaigns for more public-private partnerships in science funding. He certainly made his mark on our own Bibiana Campos-Seijo when she met him, according to her editorial in April 2012.
Recognised here for ‘his work in the applications of chemistry to biological and medical sciences and as the principal inventor of the leading next generation DNA sequencing methodology’, we’ve featured Balasubramanian’s work on sequencing and epigenetics, as well as his successful bids to secure conspicuous amounts of funding for multidisciplinary work.
The Royal Society of Chemistry were delighted to see their first female president included in the ‘communicators’ section of the Science Council’s list, as Yellowlees has been outspoken on issues surrounding science policy, funding and diversity. The election of the first female president of the Royal Society of Chemistry ‘made a big impact,’ she told Laura Howes for our feature on women in science, ‘but in a way it’s a shame that it was worthy of remark.’ She is recognised by the Science Council for ‘being a public champion for more women in science’, and has helped Chemistry World look back at the history of chemistry at the University of Edinburgh, where she is head of the science faculty.
As an engaging science communicator with a PhD in thermoelectronics and one of the Royal Society of Chemistry’s 175 faces, it’s perhaps unsurprising that Gallagher has graced our pages, most recently with an insightful analysis of the countries of origin for each of the elements of the periodic table.
He’s also a passionate advocate of equality and diversity in science, telling the Royal Society of Chemistry that ‘science is for everyone and we have hundreds of years of history to correct, we are making fast progress but until equality is achieved across the board, until anyone who wants to pursue science has that ability then we must continue to fight.’
The value of education is highlighted by the Science Council selecting 10 ‘teacher scientists’, including John Holman, who in addition to being emeritus professor in the department of chemistry at the University of York, is a senior fellow in education at the Wellcome Trust. He has written for Chemistry World on trends in high school chemistry teaching, and was the first director of the National Science Learning Centre in York.
Last but not least is Peter Wothers, a teaching fellow at the department of chemistry, University of Cambridge. The Science Council recognises him for ‘his role in helping to bridge the transition between sixth-form and university through his leadership in developing the syllabus for the Chemistry Pre-University qualification’, but Wothers is also a renowned science communicator, performing to packed houses at the Cambridge Science Festival and delivering the 2012 Royal Institution Christmas lectures. This hasn’t stopped him from finding time to contribute to a Chemistry World feature on ‘inspiring the next generation’, being profiled in our jobs pages and telling the stories of compounds on the Chemistry in its Element podcast.
We’re running a series of guest posts from the judges of the 2013 Chemistry World science communication competition. Here, Chemistry World editor Bibiana Campos-Seijo adds her thoughts on ‘openness in science’.
I’ve very much enjoyed reading the posts by my fellow judges. All interpret the theme of the competition in very different ways, but one of the threads I picked up is that most focus on how openness affects the relationship between the scientist and others – for example, between scientists and the publishers of information, with Philip Ball calling for a preprint server for chemical papers to encourage debate, engagement and the swift dissemination of information; or between scientists and the media, with Adam Hart-Davis drawing on his own experience.
I input ‘openness in science’ into a Google search (most people would never admit to doing this but it helps me focus, refine and polish my thoughts and ideas when I can’t find a way to articulate them appropriately and/or swiftly) and that same thread continues with small variations.
One of the top results is a study launched in 2011 by the Royal Society titled ‘Science as a public enterprise: opening up scientific information’. Its focus was to determine how the sharing of scientific information should best be managed to improve the quality of research and build public trust. The report concerns openness in relation to the interaction between the scientist and others, in this case the public.
Another result that caught my eye was a document titled ‘The value of openness in scientific problem solving’. The focus in this case was information sharing between different scientists and the role that collaborations have in the scientific process. The theme once more is openness in relation to the interactions among scientists.
What strikes me is that there is very little mention of openness in relation to the individual. For me, openness is all about the person and is an attitude that is at the core of what makes a scientist. How s/he then chooses to interact or share with others is somewhat secondary and in many cases is done via pre-existing routes (eg publishing, scientific conferences, etc). Openness and an open mind are vital to understand the scientific process and the challenges it brings, to foster innovation and to embrace and implement new discoveries and technology as they come along. Of course, it is also about being open to discussion and challenges by others but openness at the level of the individual is, in my view, vital to the definition of a scientist.
Obviously there is no right or wrong answer and openness is all of the above. However you choose to interpret it, we are looking forward to receiving your entry and are very excited to be supporting this competition once again.
Bibiana Campos Seijo is the editor of Chemistry World and magazines publisher for the Royal Society of Chemistry. After completing a PhD in chemistry, she ran her own e-learning business before moving into publishing first as a technical editor for the European Respiratory Society and then managing a portfolio of pharmaceutical titles at Advanstar Communications. In 2009, she moved to the Royal Society of Chemistry to lead the Magazines team.
This month, we’re looking at some of the skills scientists can develop outside their labs, which can help them in their careers. You can hear from participants in two of the training courses we run, on media skills and leadership effectiveness. We also handed the microphone over to some budding scientists, who had had the chance to “Ask a scientist” some questions themselves. Finally, we caught up with participants in the Royal Society’s Pairing Scheme, which matches research scientists with MPs, senior civil servants and Lords.
00:45 Dr Vardis Ntoukakis and Dr Elisa Antolin tell us about their experience at the Royal Society Media Skills training day.
05:20 We hear from recent attendees at the Royal Society’s residential training course in Leadership Effectiveness at Chicheley Hall.
12:52 Students from Britannia Village School interview Chris Harrison from the University of Durham.
17:29 Students from Wimbledon College interview Paul Kirk, a mathematician from Imperial College.
20:22 We catch up with scientists and MPs taking part in the Royal Society Pairing Scheme.
"A little more persistence, a little more effort, and what seemed hopeless failure may turn to glorious success." -Elbert Hubbard

I've had the great fortune in my life to see a great many wonderful things with my own eyes, including the rings of Saturn, the phases of Venus, a couple of faint, distant galaxies, and a large number of sunsets, sunrises, and lunar eclipses. But as far as solar eclipses go, I missed the only realistic opportunity I ever had to see -- as Cara Beth Satalino would say -- that ring of fire.
Back in 1994, an annular solar eclipse happened just 300 miles from where I was living. While I got to see the partial eclipse that resulted from being off of the ideal path, I'd never seen either a total or annular solar eclipse. But this weekend was my big chance, and I wasn't going to miss it. For the first time, I set out on an eclipse expedition, hoping to catch a glimpse of the spectacular sights that one of my former astronomy students had grabbed hours earlier from Tokyo.
(Image credit: Destiny Fox. Thanks, Destiny!)
As many of you know, I've been preparing for this for a couple of months, and that started with scouting out a prime location. The one I chose was right on the coast, for the earliest possible view from America, right in the middle of the path of maximum eclipse.
Choosing the middle of that path means that I was going to get to see -- if the conditions were ideal -- the Moon pass over the dead center of the Sun, creating a true ring of fire. The place where this was going to happen was False Klamath Cove, a rock-littered area in very northern California. But this place was "only" about 330 miles from where I live today, in Portland, Oregon, and so I made the trip down. About an hour before maximum eclipse, this was the view I had.
Yes, it was somewhat cloudy, and I knew the clouds and fog would be continuing to roll in, but it wasn't hopeless. You see, the clouds were thin enough that the "binocular trick," where you un-cap one side of a pair of binoculars and project the image of the Sun onto a white screen behind it, was still very effective.
As you can see, you were still able to see the Sun's disk, as well as the fraction of it that was obscured by the Moon. But I wasn't going to settle for a projection of the Sun's disk onto a screen; I wanted to see it with my own eyes. And so that meant bringing a little protective eyewear. In addition to my polarized sunglasses, I also brought along two wonderful pieces of equipment: a pair of shade-5 welder's goggles and a shade-10 welder's hood.
Under sunny, high-noon conditions, you need shade-14 to safely look at the Sun. Thankfully, eye protection is additive, so wearing both of these together meant that I could look at the Sun without concern for safety.
I'm not going to lie: other than a green tint, this view was spectacular. The Sun was crisp, the clouds could be seen dancing across its face, and the fraction that was obscured by the Moon was cleanly and clearly visible. I'm definitely going to be using both of these, together, to watch the Venus transit in a couple of weeks.
But for photography? That's never been a skill (or even an interest) of mine, so all I could do was experiment. Placing the shade-5 goggles in front of the camera was clearly not enough.
While the cloud cover was light, as it was in the early stages of the eclipse, it turns out that the shade-10 hood, on its own, was significantly better than the goggles.
You could see, with the camera, that part of the Sun was obscured, but the image was still greatly overexposed, making it virtually impossible to see any detail.
I tried using both the goggles and the hood together. But the combination that worked so well for my eyes was a miserable failure for the camera.
As you can see, the Sun's disk still appeared overexposed, plus now there were problems of multiple reflections between the different surfaces, ruining the image on the camera.
But as we neared the moment of maximum eclipse, and the Sun dwindled to a crescent, slowly creeping around the edges of the Moon, something both wonderful and horrifying began to happen. Thick, foggy clouds began to roll in, as they do every evening in this part of the world at this time of the year. But it meant something wonderful for my feeble photography skills.
My images were suddenly less over-exposed. And as the fog rapidly thickened, I discovered that I no longer needed shade-15 protection to watch the eclipse. I no longer needed shade-10, in fact. At the moment of maximum eclipse, I had nothing but the shade-5 welder's goggles over the lens of the camera, and this was the photo I got.
Digital cameras, of course, get outstanding resolution. So this perfect circle, this ring of fire, actually showed up like this.
There's no way to describe what it's like to see it with your own eyes, but my experience was probably unique, because rather than watching the Moon move off of the Sun, I watched this ring of fire fade away behind some ever-thickening clouds, and disappear from sight.
And that's why even though there are no more pictures from my first eclipse expedition, you can bet it won't be my last!
"But some of the greatest achievements in philosophy could only be compared with taking up some books which seemed to belong together, and putting them on different shelves; nothing more being final about their positions than that they no longer lie side by side. The onlooker who doesn't know the difficulty of the task might well think in such a case that nothing at all had been achieved." -Wittgenstein

One of the most fundamental questions about the Universe that anyone can ask is, "Why is there anything here at all?"
(Image credit: Patrick at vignetted.com.)
Out beyond Earth, of course, there are trillions of other worlds within our own galaxy, and at least hundreds of billions of galaxies within just the part of our Universe that's observable to us.
(Image credit: NASA, ESA, S. Beckwith (STScI) and the HUDF Team.)
Explaining where all the matter in the Universe comes from is one thing. What you traditionally think of as something -- that is, the plants, animals, elements, planets, stars, galaxies and galaxy clusters -- that's one question.
How and when all of that got here? That's something we think we can answer.
(Image credit: me, as a New Year's present to you.)
But there's an even more fundamental question than that. In order to have our Universe, you need to start with what, as a physicist, I call nothing.
You need to start with empty spacetime.
And you can start with the emptiest spacetime imaginable: something flat, devoid of matter, devoid of radiation, of electric fields, of magnetic fields, of charges, etc. All you would have, in that case, is the intrinsic zero-point energy, or the ground state, of empty space.
From a physical point of view, that's what nothing is. Only, perhaps perplexingly, that zero-point energy? It isn't zero.
(Image credit: Brian Greene's Elegant Universe.)
If it were, we wouldn't have a Universe filled with dark energy, and yet we do. Instead, spacetime has a fundamental, intrinsic, non-zero amount of energy inherent to it; that's what's causing the Universe's expansion to accelerate! What's even more bizarre than that is the fact that all the matter and energy in the Universe today came from a drop, long ago, from an even higher zero-point-energy state. That process -- reheating -- is what comes at the end of an indeterminately long phase of exponential expansion of the Universe known as cosmic inflation.
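Why does a positive vacuum energy accelerate the expansion? The standard argument (not spelled out in the post itself) is that vacuum energy carries negative pressure, p = -rho c^2, which flips the sign of the Friedmann acceleration equation:

```latex
% Friedmann acceleration equation: \rho is energy density, p pressure.
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)
% Vacuum (zero-point) energy has p = -\rho_{\mathrm{vac}} c^2, giving
\frac{\ddot a}{a} = +\frac{8\pi G}{3}\,\rho_{\mathrm{vac}} > 0
% so a positive vacuum energy density drives accelerated expansion.
```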
(Image credit: Ned Wright.)
The regions of space where this drop in zero-point energy occurred gave rise to regions of the Universe like ours, where matter and energy exist in abundance, and where the expansion of spacetime is relatively slow. But the regions where it hasn't yet occurred continue to have an extremely rapid rate of expansion. This is why physicists state that inflation is eternal, and this is also the physical motivation for the existence of multiverses.
In the diagram below, regions marked with red X's are regions where the drop in zero-point energy occurs, and a region of the Universe like ours comes into existence.
(Image credit: me, created especially for you last year.)
That's the physical story of where all this comes from: of where our planets, stars, and galaxies come from, of where all the matter and energy in the Universe comes from, of where the entire 93-billion-light-year-wide expanse of our observable Universe comes from.
From a scientific perspective, we think we understand not only where all of this comes from, but also the fundamental laws that govern it. So when a physicist writes a book called: A Universe from Nothing, I know that some version of this story -- the scientific story of how we get our entire Universe from nothing -- is the one you're going to get told.
It's a remarkable story, it's perhaps my favorite story to tell, and it's certainly been the greatest story I've ever learned. But in at least one way, it's a dissatisfying story. Because the scientific definition of "nothing" that we use -- empty, curvature-free spacetime at the zero-point energy of all its quantum fields -- doesn't resemble our ideal expectations of what "nothing" ought to be.
(Image credit: retrieved from Universe-Review.ca.)
No one sufficiently versed in the science of physical cosmology (and being sufficiently honest with themselves about it) would argue against this: that the entire Universe that we know and exist in comes from a state like this, that existed some 13.7 billion years ago. But you may rightfully ask, "Is that truly nothing?"
This empty spacetime definition of what is physically nothing stands in contrast to what we can imagine as what I'll call pure (or philosophical) nothingness, where there's no space, no time, no laws of physics, no quantum fields to be in their zero state, etc. Just a total void.
This has been the source of much argument recently, as the answer to the physical question of where everything comes from does not necessarily answer the philosophical one. It certainly pushes it off for a while, but it still leaves unexplained the existence of spacetime and the laws of physics themselves. There has been bickering back-and-forth, with a handful of physicists and philosophers arguing over whether this physical story really explains why there is something rather than nothing.
It is a remarkable story, of course, and it explains where every galaxy, every star, and every atom in the Universe comes from -- an astounding feat.
(Image credit: Don Dixon.)
But it doesn't explain, existentially, why spacetime or the laws of nature themselves exist, or exist with the properties that they have. In short, understanding how something comes from nothing does not explain how this physical state of nothing comes from an existential nothingness. This question of why, as enunciated by Heidegger, is not addressed by our physical understanding of the Universe. But is it a fair question?
Like the oft-dismissive Wittgenstein, I'm not sure. We make this inherent assumption that both spacetime and the laws inherent to our Universe come from somewhere. Yet our classical notions and intuitions about causality are violated even within our known Universe; do we have good reason to expect that this non-universal form of logic applies to the very existence of the Universe itself? Furthermore, how can something, even figuratively, come from anything else if you remove time?
One can, of course, imagine answers to these questions: an entity of some sort that exists outside of time and thus has access to all times equally, a type of hidden-variable logic that exists as part of reality but requires the knowledge of things that are presently unobservable to us, a higher-dimensional being who sees our entire Universe no differently from how an animator sees the elements of a two-dimensional cartoon, etc.
(Image credit: Chuck Jones / Warner Brothers Studios.)
None of these answers are convincing or compelling, mind you, and I am not sure that the questions even make sense as far as reality is concerned. But just because we cannot yet know the answers, or whether the questions are sensible as far as reality is concerned, doesn't mean there isn't value in asking them and thinking about them. To me, that's what philosophy is. I would encourage everyone to remember the words of my favorite philosopher, Alan Watts:

"The reason for it is that most civilized people are out of touch with reality because they confuse the world as it is with the world as they think about it, talk about it, and describe it. On the one hand, there is the real world, and on the other, a whole system of symbols about that world that we have in our minds. These are very, very useful symbols -- all civilization depends on them -- but like all good things, they have their disadvantages, and the principal disadvantage of symbols is that we confuse them with reality."

For whatever it's worth, when I think of nothing, I think about empty spacetime and the physical Universe: that's where my interests lie, and that's where I believe the knowable lies. But that doesn't mean there isn't something wonderful to be gained from philosophizing. As Alan Watts himself said:
(Video credit: dFalcStudios.)
And as well as this explanation actually describes what I think about the Universe, it didn't come from a physicist. So let's stop accusing each other -- physicists and philosophers -- of being bad at one another's disciplines, and let's work on getting it right. Education is always worth it. Read the comments on this post...
So, you find yourself living in the San Francisco Bay area, and you maybe have a dog who would like to know something about relativity, or you maybe want to someday have a dog who will want to know something about relativity, or you maybe want to know something more about relativity yourself, in case you ever find yourself cornered in a dark alley by a Rhodesian ridgeback who snarls "Explain time dilation to me, or I'll eat your face!" Well, in that case, you definitely want to be at Kepler's Books in Menlo Park on the evening of June 14th, when I'll be doing a book promotion thing for How to Teach Relativity to Your Dog.
So, here's your chance to hear me do the silly dog voice in person, and maybe get a book signed. Emmy won't be making the trip (I doubt she'd do well on a plane...), but I'm looking forward to it.
"The Earth destroys its fools, but the intelligent destroy the Earth."
-Khalid ibn al-Walid Usually, when we talk about terraforming, we think about taking a presently uninhabitable planet and making it suitable for terrestrial life. This means taking a world without an oxygen-rich atmosphere, with watery oceans, and without the means to sustain them, and to transform it into an Earth-like world.
The obvious choice, when it comes to our Solar System, is Mars.
(Image credit: Daein Ballard.)
The red planet, after all, is not a total stranger to these conditions. On the contrary, for the first billion-and-a-half years of our Solar System, give or take, Mars was perhaps not so dissimilar to Earth. With evidence that there was once liquid water on the surface, a thicker atmosphere, and possibly even life, there's no doubt that the right type of geo-engineering could bring those conditions back.
But there's also no doubt that we could, if we were sufficiently motivated, turn the Earth from this...
(Image credit: NASA / GSFC / NOAA / USGS.)
into a world where the atmosphere and the oceans were stripped away. Into a dry, nearly airless world, much like Mars.
Inspired by a recent Astronomy Picture of the Day, above, it's now time to tell you how I would, scientifically, remove the oceans from the planet. It's a process I like to call reverse terraforming, whereby you turn a world like the Earth into a world like Mars.
At present, this is difficult for a number of reasons, but here's the biggest one.
(Image credit: Natalie Krivova.)
The Earth's magnetosphere! The same reason that your compass needle points towards the magnetic poles of Earth is the only thing keeping our oceans here on our world! The Sun is constantly shooting out a stream of high-energy ions, known as the solar wind, at speeds of about 1,000,000 miles-per-hour (1,600,000 km/hr).
As the solar wind runs into a world, these ions collide with particles in a planet's atmosphere, giving those molecules enough kinetic energy to escape from the planet's gravitational field.
Of course, we have a powerful magnetic shield from the solar wind thanks to our hot, dense and (partially) molten core. Our planet's magnetic field successfully bends away practically all of the solar wind particles that would be in danger of colliding with us, with the occasional exception of the polar regions, where the ions -- and hence sometimes aurorae -- get through.
(Image credit: NASA, retrieved from Cloudetal.)
Right now, our atmosphere is pretty thick: it consists of some 5,300,000,000,000,000 tonnes of material, creating the atmospheric pressure that we feel down here at the surface. There's so much pressure, in fact, that our Earth can sustain liquid water on the surface.
(Image credit: David Mogk, Montana State University.)
The ability to have liquid water is relatively rare: we need the proper temperatures and the proper pressures! That means we need at least an atmosphere of a certain thickness, a characteristic that Mars, Mercury, and the Moon totally lack. But we've got it, and hence we can have liquid water on our surface.
And do we ever! There's much more water than there is atmosphere. The oceans outweigh the atmosphere by a factor of about 250, and comprise about 0.023% of the Earth's total mass!
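For the skeptics, these numbers are easy to sanity-check. Here's a rough back-of-envelope sketch; the ocean and Earth masses below are commonly quoted round figures I'm assuming, not values from the post itself:

```python
import math

G_ACCEL = 9.81          # m/s^2, surface gravitational acceleration
R_EARTH = 6.371e6       # m, mean Earth radius
M_ATMOSPHERE = 5.3e18   # kg (the 5.3e15 tonnes quoted above)
M_OCEANS = 1.4e21       # kg, commonly quoted total ocean mass (assumed)
M_EARTH = 5.972e24      # kg (assumed)

def surface_pressure(m_atm, g=G_ACCEL, r=R_EARTH):
    """The weight of the atmosphere spread over the sphere's surface area."""
    return m_atm * g / (4 * math.pi * r**2)

ratio = M_OCEANS / M_ATMOSPHERE        # ~264, i.e. "about 250 times"
ocean_fraction = M_OCEANS / M_EARTH    # ~2.3e-4, i.e. ~0.023% of Earth's mass
pressure = surface_pressure(M_ATMOSPHERE)  # ~1.0e5 Pa, about 1 atmosphere
```

Reassuringly, dividing the atmosphere's weight by the Earth's surface area lands almost exactly on standard sea-level pressure.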
But we could get rid of all that liquid water, eventually, by letting the solar wind in.
(Image credit: NASA / Themis mission.)
When the Earth's and Sun's magnetic fields align, something like 20 times as many particles as normal make it through. Charged particles are bent by magnetic fields in very predictable ways, and if we could control those fields, we could control how much of the solar wind made it through.
In other words, if we could create a large enough magnetic field on Earth, we could poke a hole in the magnetosphere and allow the solar wind to strip our atmosphere away!
(Image credit: NASA, retrieved from futurity.org.)
Something similar happened to Mars about 3 billion years ago, when its core stopped producing that powerful magnetosphere shield, and its atmosphere got stripped away. When the pressure at the surface dropped below a certain level, the liquid oceans there could no longer exist: they either froze into ice or boiled off as water vapor. (And once they're water vapor, they become part of the atmosphere, where they, too, can be stripped away by the solar wind!)
It may not be fast enough for the most supervillainous among you, but one thing's for sure.
(Image credit: flickr user Ole C. Salomonsen.)
If we do poke a hole in the magnetosphere and allow the solar wind in, I'll definitely be enjoying the auroral show!
"The doctors realized in retrospect that even though most of these dead had also suffered from burns and blast effects, they had absorbed enough radiation to kill them. The rays simply destroyed body cells - caused their nuclei to degenerate and broke their walls." -John Hersey Everyone (well, almost everyone) recognizes that radiation is bad for you. And the higher the energy of the radiation, the worse it is for you. The reason is relatively straightforward.
(Image credit: Environmental Protection Agency.)
When high energy particles (or photons) come into contact with normal matter, they knock the electrons off of atoms, ionizing them. This action breaks apart bonds, disrupting the structure and function of cells on a molecular level. And, as you might expect, the higher the energy, the more extensive is the damage that the ionizing radiation can do.
Targeted radiation -- at cancer cells, for instance -- is useful for this exact reason: it destroys the cancer cells. Sure, some of your cells are in the way, too, but radiation therapy is designed to kill the cancer faster (and more effectively) than it kills you.
But too much ionizing radiation will cause too much damage to your body, and spells doom for any human.
(Image credit: CERN / LHC, retrieved from here.)
Here on Earth, the most intense sources of energetic particles are those that come from the world's most powerful particle accelerators: at present, that's the Large Hadron Collider.
But the thing is, you don't know whether a particle accelerator is on just by looking at it. There are few enough high-energy particles even in the most powerful accelerators that the particles themselves are -- and hence the entire beam is -- invisible to the naked eye.
(Image credit: KEK e+/e- LINAC.)
You can't even feel it, much like getting X-rays at the dentist. But, as you may have guessed, there is a trick. An awful, terrible, do-not-try-this-at-home trick. You see, you already know that nothing can move faster than the speed-of-light in a vacuum.
But the speed of light decreases, often quite dramatically, if you're not in a vacuum.
(Image credit: Grimsmann and Hansen.)
This is actually the reason why light bends when it passes through a prism, or a straw/pencil appears bent when you immerse it in a glass of water.
(Image credit: Richard Megna - Fundamental Photographs.)
The relationship between how much an object appears to bend and the speed-of-light in that medium is actually very simple, and tells you that the speed-of-light in water is only about 75% of what it is in a vacuum.
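The relationship is just Snell's law and the refractive index. A quick sketch of the arithmetic (the refractive index of water, n = 1.33, is the standard textbook value for visible light):

```python
import math

C = 299_792_458.0   # m/s, speed of light in vacuum
N_WATER = 1.33      # refractive index of water, visible light (approx.)

def light_speed(n):
    """Phase velocity of light in a medium with refractive index n: v = c/n."""
    return C / n

def snell_refracted_angle(theta_incident_deg, n1=1.0, n2=N_WATER):
    """Snell's law: n1*sin(t1) = n2*sin(t2). Returns the refracted angle in degrees."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    return math.degrees(math.asin(s))

v_water = light_speed(N_WATER)      # ~0.75c, the "75%" figure above
bent = snell_refracted_angle(45.0)  # ~32 degrees: light bends toward the normal
```

So a ray hitting water at 45 degrees refracts to about 32 degrees, and the same n = 1.33 tells you light in water moves at roughly three-quarters of its vacuum speed.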
And in many real-world cases, such as from particle accelerators, nuclear reactors, and radioactive decays, we make particles that -- while not faster than light-in-a-vacuum -- can travel faster than the speed of light in a medium!
(Image credit: Matt Howard, Idaho National Laboratory / Argonne.)
And when that happens, when a particle moves faster than the speed-of-light in a medium, light is produced! That's what's going on inside this nuclear reactor and causing this blue glow: the radioactive particles (electrons, in this case) are moving faster than the speed-of-light in water, and hence the particles are emitting Čerenkov Radiation!
What's Čerenkov Radiation?
(Image credit: Cherenkov Telescope Array in Argentina.)
The charged particles, passing through this medium at such great speeds, electrically polarize the medium, which then transitions back down rapidly to the ground state. The polarizing of the medium costs the fast-moving particle some energy, slowing it down, while the transition causes the particles in the medium to emit radiation, and that's where your light -- the Čerenkov Radiation -- comes from!
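The condition for this to happen at all is simply that the particle's speed exceed c/n, and the light comes out in a cone whose half-angle obeys cos(theta) = 1/(n*beta). As a rough sketch of the numbers for electrons in water (n = 1.33 and the electron rest energy of 0.511 MeV are standard values I'm assuming here):

```python
import math

M_E_MEV = 0.511   # MeV, electron rest energy
N_WATER = 1.33    # refractive index of water (approx.)

def cherenkov_threshold_ke(n, rest_energy_mev):
    """Minimum kinetic energy for Cherenkov emission: requires beta > 1/n."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * rest_energy_mev

def cherenkov_angle_deg(beta, n):
    """Cherenkov cone half-angle: cos(theta) = 1/(n*beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

ke_min = cherenkov_threshold_ke(N_WATER, M_E_MEV)  # ~0.26 MeV for electrons in water
angle = cherenkov_angle_deg(0.999, N_WATER)        # ~41 degrees, near the beta ~ 1 limit
```

An electron needs only about a quarter of an MeV of kinetic energy to glow in water, which is why so many beta-decay electrons from a reactor core light it up blue.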
So how do you tell if the beam is on?
(Image credit: flickr user ohrfeus.)
Horrifically, you stick your closed eye in there!!!
With your eye closed, you should see blackness under normal circumstances. But with the beam on, the high-energy particles entering your eye will see that nice, aqueous fluid that fills your eyeball. And since they're passing through at -- you guessed it -- greater than the speed-of-light in your vitreous eye-fluid, they're going to emit Čerenkov Radiation.
(Image credit: The Gale Group, retrieved from science clarified.)
So if the beam is on, you'll see that light -- that Čerenkov light -- on the back of your eye. And if it's off, you won't.
If that makes you squirm, it should. Physicists used to die from cancer from lack of safety when it came to radiation at alarming rates, and we are no longer (thankfully) allowed to test whether the beam is on or not via methods like this. But this is an interesting bit of history of particle physics that I couldn't not share with you.
And now, in a life-or-death situation, you know how to tell whether the beam is on or not, consequences be damned!
Today is the anniversary of the discovery, by John Tebbutt of New South Wales, Australia, of the Great Comet of 1861. Tebbutt was an astronomer.
The comet was initially visible only in the southern hemisphere, but then became visible in the northern hemisphere on about June 29th. I find it interesting that word of the comet spread slowly enough that it was seen in the north before it was heard of.
It has been suggested that this comet had been previously sighted in April of 1500 (that comet is now known as C/1500 H1). The comet will return during the 23rd century.
"I tell you, we are here on Earth to fart around, and don't let anybody tell you different." -Kurt Vonnegut Kurt Vonnegut may have it right for most of us, but not all of us spend all of our time on Earth. A select dozen of us, in fact, have made it to, as Cat Power would sing, to
(Image credit: Apollo 15, Dave Scott, NASA.)
Back in 1971, the Apollo 15 astronauts made huge strides in space exploration, making use of the first manned lunar rover and spending over 18 hours on activities outside of the spacecraft. But two (of the three) crew members experienced heartbeat irregularities on the mission, and the cause was unknown. This was the first time such irregularities were observed in Apollo astronauts, and NASA was, understandably, unhappy about this. Their biomedical research team initially concluded (incorrectly) that it was likely due to a potassium deficiency, and so some modifications were made to the astronauts' diets for the next mission.
A modification, mind you, that would change the place in history of one astronaut forever.
(Image credit: NASA, Apollo 10 official astronaut photo.)
Astronaut John Young is one of the most decorated astronauts in history. With an astronaut career spanning more than 40 years, he flew twice, each, on Gemini, Apollo, and Space Shuttle missions, including the inaugural STS-1 shuttle flight.
But John Young also was a crew member aboard Apollo 16, where he would become the ninth person to walk on the Moon. Oh, and where he was compelled to indulge in a diet very, very high in potassium. In particular, in the form of orange juice.
(Image credit: NASA / Science Source / Science Photo Library.)
Now, John Young had a history with particularities about food. He became something of an astronaut folk hero for smuggling a corned beef sandwich on board Gemini-3 in 1965, but was wholly unprepared for the ingestion of such tremendous quantities of orange juice.
Or, rather, for the effect that said orange juice would have on his body. And NASA was duly unprepared for the effect that would have on John Young's language.
(Video credit: NASA audio; YouTube user jude4021.)
While I imagine that there are few astronaut experiences worse than dutch ovening yourself inside your own spacesuit, you can also imagine that the governor of Florida was not too pleased at a mic'd up diatribe against the signature crop of his state. Words, in fact, that John Young would have to answer for in an official Apollo 16 press conference:
I don't know whatever happened to the gases released by astronaut Young (and others), but if there's any methane on the Moon, you can be sure that this is where it comes from!
"They say 'A flat ocean is an ocean of trouble. And an ocean of waves... can also be trouble.' So, it's like, that balance. You know, it's that great Oriental way of thinking, you know, they think they've tricked you, and then, they have." -Nigel Tufnel Black holes* are some of the most perplexing objects in the entire Universe. Objects so dense, where gravitation is so strong, that nothing, not even light, can escape from it.
(Image credit: Artist's Impression from MIT.)
But there are a number of very counterintuitive things that happen as you get near a black hole's event horizon, and a very, very good reason why once you cross it, you can never get out! No matter what type of black hole you had, not even if you had a spaceship capable of accelerating in any direction at an arbitrarily large rate.
It turns out that General Relativity is a very harsh mistress, particularly when it comes to black holes. It goes even deeper than that, mind you, and it's all because of how a black hole bends spacetime.
(Image credit: Adam Apollo.)
When you're very far away from a black hole, spacetime is less curved. In fact, when you're very far away from a black hole, its gravity is indistinguishable from any other mass, whether it's a neutron star, a regular star, or just a diffuse cloud of gas.
The only difference is that instead of a gas cloud, star or neutron star, there will be a completely black sphere in the center, from which no light will be visible. (Hence the "black" in the moniker "black holes.")
(Image credit: Astronomy/Roen Kelly, retrieved from David Darling.)
This sphere, known as the event horizon, is not a physical entity, but rather a region of space -- of a certain size -- from which no light can escape. From very far away, it appears to be the size that it actually is, as you'd expect.
(Image credit: Cornell University.)
For a black hole the mass of the Earth, it'd be a sphere about 1 cm in radius, while for a black hole the mass of the Sun, the sphere would be closer to 3 km in radius, all the way up to a supermassive black hole -- like the one at our galaxy's center -- that would be more like the size of a planetary orbit!
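Those sizes all come from one formula: the Schwarzschild radius, r_s = 2GM/c^2. A minimal sketch of the calculation (the masses and the ~4 million solar mass figure for the Milky Way's central black hole are standard round numbers I'm assuming, not values from the post):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
C = 299_792_458.0    # m/s, speed of light
M_SUN = 1.989e30     # kg, solar mass (assumed)

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole: r_s = 2GM/c^2."""
    return 2.0 * G * mass_kg / C**2

r_earth = schwarzschild_radius(5.972e24)       # ~0.9 cm for an Earth-mass black hole
r_sun = schwarzschild_radius(M_SUN)            # ~3 km for a solar-mass black hole
r_sgr_a = schwarzschild_radius(4.0e6 * M_SUN)  # ~1.2e10 m for our galaxy's center
```

Note that r_s scales linearly with mass, which is why a black hole a few million times the Sun's mass has a horizon tens of millions of kilometers across.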
From a great distance away, geometry works just like you'd expect. But as you travel, in your perfectly equipped, indestructible spacecraft, you start noticing something strange as you approach this black hole. Unlike all the other objects you're used to, which appear to get visually larger in inverse proportion to your distance from them, this black hole appears to grow much more quickly than you were expecting.
(Image credit: Ute Kraus, Physics education group Kraus, Universität Hildesheim.)
By the time the event horizon should be the size of the full Moon on the sky, it's actually more than four times as large as that! The reason, of course, is that spacetime curves more and more severely as you get close to the black hole, and so the "lines-of-light" that you can see from the stars in the Universe that surround you are bent disastrously out of shape.
Meanwhile, the apparent area of the black hole grows dramatically; by the time you're just a few (maybe 10) Schwarzschild radii away from it, the black hole has grown to such an apparent size that it blocks off nearly the entire front view of your spaceship.
As you start to come closer and closer to the event horizon, you notice that the front-view from your spaceship becomes entirely black, and that even the rear direction, which faces away from the black hole, begins to be subsumed by darkness. The entirety of the Universe that's visible to you begins to close off in a shrinking circle behind you.
Again, this is because of how the light-paths from various points travel in this highly bent spacetime. For those of you (physics buffs) who want a qualitative analogy, it begins to look very much like the lines of electric field when you bring a point charge close to a conducting sphere.
(Image credit: J. Belcher at MIT.)
At this point, having not yet crossed the event horizon, you can still get out. If you provide enough acceleration away from the event horizon, you could escape its gravity and have the Universe go back to your safely (asymptotically) flat spacetime. Your gravitational sensors can tell you that there's a definite downhill gradient towards the center of the blackness and away from the regions where you can still see starlight.
But if you continue your fall towards the event horizon, you'll eventually see the starlight compress down into a tiny dot behind you, changing color into the blue due to gravitational blueshifting. At the last moment before you cross over into the event horizon, that dot will become red, white, and then blue, as the cosmic microwave and radio backgrounds get shifted into the visible part of the spectrum for your last, final glimpse of the outside Universe.
(Image credit: ZetaPrints.com.)
And then... blackness. Nothing. From inside the event horizon, no light from the outside Universe hits your spaceship. You now think about your fabulous spaceship engines, and how to get out. You recall which direction the singularity was towards, and sure enough, there's a gravitational gradient downhill towards that direction.
But your sensors tell you something even more bizarre: there's a gravitational gradient that's downhill, towards a singularity, in all directions! The gradient even appears to go downhill towards a singularity directly behind you, in the direction that you knew was opposite to the singularity! How is this possible?
(Image credit: Cetin Bal.)
Because you're inside the event horizon, and even any light beam (which you could never catch) you now emitted would end up falling towards the singularity; you are too deep in the black hole's throat! What's worse is that any acceleration you make will take you closer to the singularity at a faster rate; the way to maximize your survival time at this point is to not even try to escape! The singularity is there in all directions, and no matter where you look, it's all downhill from here.
Like I said, General Relativity is a harsh mistress, particularly when it comes to black holes.
(* -- This is all done for a non-rotating, or Schwarzschild black hole. Other forms of black holes are similar, but slightly different, and much more complicated, quantitatively.)
There's been a bunch of discussion recently about philosophy of science and whether it adds anything to science. Most of this was prompted by Lawrence Krauss's decision to become the Nth case study for "Why authors should never respond directly to bad reviews," with some snide comments in an interview in response to a negative review of his latest book. Sean Carroll does an admirable job of being the voice of reason, and summarizes most of the important contributions to that point. Some of the more recent entries to cross my RSS reader include two each from 13.7 blog and APS's Physics Buzz.
I haven't commented on this because I haven't read Krauss's book (and I'm not likely to), and because my interest in philosophy generally is at a low ebb at the moment (I oscillate back and forth between thinking it's kind of a fun diversion, and thinking I have better things to do with my time). I've been thinking about a new project that's kinda-sorta on the edge of philosophy-of-science type things, though, which has involved a bit of time thinking about why my regard for the subject is at a low ebb at the moment. And seeing the title of Jason Rosenhouse's "The Reason for the Ambivalence Toward the Philosophy of Science" in the "most active" sidebar widget (the post itself isn't so interesting to me, but the framing of the title made me think of something useful), combined with the second Physics Buzz post, combined with what I was writing last week made a bunch of pieces fall into place.
My realization was this: I'm down on philosophy of science type things at the moment because an awful lot of the conversation reminds me of interminable arguments within science fiction fandom.