Photographs of nothing

You never know exactly what you'll find at an independent gallery, but it's a good bet that it won't be photos of sunsets. Sontag wrote that it's only in the eyes of an amateur that "...a beautiful photograph is a photograph of something beautiful..." Since the 1920s, professionals "...have steadily drifted away from lyrical subjects, conscientiously exploring plain, tawdry, or even vapid material." [1] With that in mind, I was almost surprised not to see images even stranger than this one at a gallery in Charleroi, Belgium. The exhibition was named "Rien" ("Nothing"), and it featured scenes from "a house long occupied. The accumulated objects evoke the preoccupations, the tastes, the activities of their owner."

[1] Sontag, S. On Photography, Rosetta Books, 1973, p. 22.

Cliffs in East Rock Park, New Haven

Fall colors shine on the dreariest days. Indeed, nature photographers love cloudy days for the same reason portrait photographers use light diffusers: clouds produce a softer, more even light than direct sunlight, making reds, greens, and yellows pop and lighting up nooks and crannies throughout the image. I took this photo last weekend in East Rock Park in New Haven after a half-hour walk in the rain along Mill River, wishing I'd worn gloves. I really liked this view of one of the ridges with the new fall colors blooming up from the bottom of the frame.

Below is the Google Street View image of this scene.

Fixing first-year physics (1/2)

From undergrad through my PhD, I learned physics in traditional lecture courses, and I very much enjoyed it. Is this selection bias? Perhaps, but it is no less true. Over the years, whenever I've heard tell of the magic of active learning, it's been like nails on a chalkboard to me. I originally set out to write a whole-hearted defense of the traditional lecture while violently skewering active learning, but in reading some of the active learning literature, I came to see that it makes some good points, often with a great deal of data on its side. Based on my experience as a TA, I still believe that active learning approaches like MIT's TEAL introduce more problems than they solve, but I'm convinced it's worth at least engaging with the issue. This first post is about how active learning was introduced to physics. The second will be about how it works in practice.

Intro physics is hard, even at the statiest of state schools. Why are these first physics courses so challenging? In many subjects, students begin a course knowing very little about the material, but in physics, Halloun and Hestenes (HH) argue that beginners know a great deal about the topic; the problem is that everything they "know" is wrong. In a pair of extremely well-cited papers ([1] and [2]), HH argue that students generally begin intro physics courses with strongly held, essentially medieval beliefs about kinematics and dynamics, and that traditional lecture courses are unable to disabuse them of these misconceptions. The implication is that students learn to perform calculations about falling projectiles and inclined planes using F = ma, x = ½at², and E = ½mv², but they are unable to integrate these concepts with their original common sense, leading to poor conceptual understanding when asked questions that require more than grasping for formulas.
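The kind of plug-and-chug calculation HH have in mind can be sketched in a few lines. The numbers here are just an illustrative textbook exercise, not taken from their papers:

```python
import math

# A standard intro-physics exercise: a 0.5 kg ball dropped from 20 m,
# air resistance ignored.
g = 9.81   # gravitational acceleration, m/s^2
h = 20.0   # drop height, m
m = 0.5    # mass, kg

t = math.sqrt(2 * h / g)   # fall time, from h = (1/2) g t^2
v = g * t                  # impact speed
E = 0.5 * m * v**2         # kinetic energy at impact

print(f"t = {t:.2f} s, v = {v:.1f} m/s, E = {E:.1f} J")
```

Students can crank through exercises like this all day; HH's point is that doing so is no guarantee they could correctly say, without formulas, which of two dropped balls hits the ground first.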

These findings ignited efforts at Harvard, MIT, ASU, and many other universities to reimagine Physics 101 to promote active instead of passive learning. In 1990, after years of polished lecturing for Harvard's Physics 11, Eric Mazur asked his students a conceptual question off the cuff and received only blank stares [3]. "How should I answer these questions according to what you taught me?" a student asked. Discouraged, he asked the students to discuss among themselves, and within just a few minutes they agreed on the correct answer. Mazur has since become one of the leaders of the active learning movement in physics, and MIT has formalized these techniques into a course known as TEAL: Technology-Enabled Active Learning.

Compared to a lecture course, TEAL seems downright bizarre. Students sit in small groups around circular tables in a specially constructed room with whiteboards covering the walls. Instructors lecture from PowerPoint slides interspersed with conceptual questions, demos, and small-group problem-solving sessions. Trials showed that both high- and low-achievers taking the TEAL course learned more than their peers in traditional lecture courses [4]. Based on these results, TEAL physics has become mandatory for all MIT freshmen (except those placing into the advanced track). Clearly MIT should be lauded for its pursuit of better teaching and learning.

But goals are one thing, reality is very much another. In my next post, I’ll discuss the good, the bad, and the ugly of active learning put into practice, based on my experience as a TA.

[1] Halloun, I. A. and Hestenes, D. (1985). The initial knowledge state of college physics students. American Journal of Physics, 53, 1043.
[2] Halloun, I. A. and Hestenes, D. (1985). Common sense concepts about motion. American Journal of Physics, 53, 1056.
[3] Lambert, C. (2012). Twilight of the lecture. Harvard Magazine, March–April 2012.
[4] Dori, Y. J. and Belcher, J. (2005). How does technology-enabled active learning affect undergraduate students' understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243–279.

St. Michael's Church, Wooster Square Park

A photograph is made of many things. Shapes, colors, patterns, and textures in an image can work in harmony to create a scene that really pops, or they can confuse the viewer with an uninteresting mishmash. Sometimes a color or texture can detract from an interesting shape or pattern. Exposing the scene to create a silhouette is a helpful trick here, especially when it creates some mystery!

Below is the Google Street View image of this scene.

Back from outer space

Our humble photography blog went off the air a few years ago as the web reached peak photo. When the Chicago Sun-Times laid off its entire full-time photography staff. When the photo I sold for stock reached 1200 views and 12 purchases, netting a cool 7 cents. When the New Yorker cartoon showing two teenagers at a museum remarking at one of Rembrandt's "early selfies" didn't seem so funny anymore. When Ansel Adams' quote that photography had become too easy started to seem more prescient than geezerly.

Did technology ruin photography in the same way it ruined music? Just as any Rebecca Black can be autotuned into a generic pop sensation, my two-year-old iPhone does enough automatic post-processing to turn any sunset into a Sargent, all without the 10 lbs of gear I used to lug across the country. So why have these breakthroughs in camera technology always felt more depressing than liberating?

Maybe because there is only a finite number of possible 8-bit color photos at 600x800 resolution, and at the rate we're going, I'd be surprised if we haven't knocked off all the interesting ones already. What room is left for creativity?
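For the curious, the size of that space of images can be sketched quickly (assuming 24-bit color at 600x800). The count itself is far too large to print, so we count its decimal digits instead:

```python
import math

# Each of the 480,000 pixels independently takes one of 256**3 colors,
# so the number of distinct images is 2**(24 * 480,000).
bits = 24 * 600 * 800

# Number of decimal digits in 2**bits:
digits = math.floor(bits * math.log10(2)) + 1
print(f"a number with {digits:,} digits")
```

So the tally is a number roughly 3.5 million digits long; exhausting it is not the real worry.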

Photography was fun when it was work. When you had to pre-visualize and plan and try and try, but mostly fail. And not just because I'm a self-deprecating perfectionist. It was fun when my picture was my own, not the same Yosemite selfie a whole busload of tourists takes twice a day at Tunnel View.

But in my heart of hearts, I still love it. So Thin Lens is back.

How I learned to stop worrying and play with white balance

Setting the correct white balance is one of the more subtle aspects of photography, especially in nature and landscape photographs, where the "wrong" white balance may produce a perfectly realistic-looking image and its adjustment is often a creative choice. Before delving into such intricacies, let's consider what white balance means and why we may need to adjust it.

Among its other feats that are difficult to incorporate into a camera, the human visual system tends to perceive an object as the same color under many different types of incident light, a perceptual effect called chromatic adaptation. This is at first surprising: we see everyday objects simply by the light they reflect into our eyes, so you might think that an object seen under different light would look quite different.

Consider a green apple; what does it mean to say it is green? Due to their exact shapes, the molecules in its skin can be excited strongly by green light, much as a child on a swing can be pushed sky-high when pushed at the right frequency. The other frequencies of the incident light pass through these molecules and are eventually absorbed, heating the apple. If the incident light is white (i.e., approximately equal amounts of every color), most of the green light is reflected while most of the other frequencies are absorbed. But if the incident light is not white (such as the bluish light of a fluorescent bulb or the reddish light of a sunset), there will not necessarily be more green light reflected than other colors, simply because there is so much less green light to begin with. However, we will typically still perceive the object to be green, thanks to chromatic adaptation in the brain.

As a side note, objects also emit some light, but at room temperature that emission is very low-energy light called infrared, which our eyes can't see. Very hot objects, however, do emit higher-energy light we can see: think of a flame or its embers, and of course the Sun.

Our chromatic adaptation typically kicks in when we are surrounded by the light source, so that everything is tinted, say by a red sunset, and our brain simply corrects for the tint so it can still recognize familiar objects. However, when inspecting a photograph in an office, we are never in the same lighting conditions as the original scene. In this case, the tint of the light illuminating the photographed scene often appears unnatural, even though it is a completely faithful representation of the original scene.

If all of the photographed scene is lit by the same common type of light, such as tungsten or fluorescent bulbs, or the direct or indirect mid-day sun, camera or post-processing settings can correct for the overall tint and make the scene appear as if lit by white light. Even if the light source doesn't fit one of these presets, post-processing software can typically back out the correct white balance correction if you identify a neutral color in the image, say a white shirt or a gray stone. A neutral gray card can be helpful too: take one photograph with the gray card in the frame and another after removing it, and set the white balance on the gray card.
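In code, the neutral-point trick boils down to scaling each channel so the sampled patch comes out gray. Here's a minimal per-pixel sketch, with pixel values made up for illustration:

```python
def white_balance(rgb, neutral):
    """Scale each channel so the sampled neutral patch comes out gray."""
    gray = sum(neutral) / 3.0
    return tuple(min(c * gray / n, 1.0) for c, n in zip(rgb, neutral))

# Under reddish light, a gray card photographs red-heavy:
card = (0.6, 0.5, 0.4)

print(white_balance(card, card))             # the card itself becomes neutral gray
print(white_balance((0.9, 0.6, 0.3), card))  # reds tamed, blues boosted
```

Real raw converters apply this idea in a camera-specific color space before rendering, so treat this as the concept rather than a production pipeline.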

The above hints at why white balance might be problematic in landscape photography. Sometimes the correct scene looks wrong, the wrong one looks right, and there is a creative spectrum in the middle! Indeed, consider these images of the San Francisco skyline through the fog. The first in the sequence has no white balance adjustment, and in the following ones I set the white balance on the buildings, the sailboat, and the water. The first is my favorite, as I find it the most visually compelling and its mood best matches what I remember of the scene. But others will disagree, and the dirty truth is that there is no right white balance correction of an image. If anything, I think there's a case to be made that the most faithful correction is none at all. Even if you want the final image to look perfectly realistic, there will be many white balances which subtly change the mood of the scene, and this particular adjustment becomes just another part of the creative process.

Below is the Google Street View image of this scene.

"So…you study string theory?"

Many people hear fundamental physics and think of theoretical physics like string theory. In fact, physics research might be fundamental, or it might be theoretical, or both, or neither. Pop culture often glosses over the fact that the two terms are not synonymous; they refer to different aspects of the research. I find it useful to illustrate this distinction by classifying research on a 2D grid, where the vertical axis runs from experimental to theoretical and the horizontal one from fundamental to applied, as in the diagram below.

A schematic diagram of physics research, from fundamental to applied, and experimental to theoretical, as well as several examples in each quadrant. Research topics often start in one quadrant and spread tentacles into others. Often, over time, fundamental research becomes more applied and theoretical research becomes more experimental, but sometimes the opposite happens, or neither.

I don't mean to classify research topics, such as atoms or galaxies or superconductors, which often grow tentacles into more than one quadrant, but rather research types, which are more closely tied to the specific techniques employed. Experimental research typically involves building instruments and collecting and analyzing data to check theories, while theoretical research involves proposing new theories or making predictions from pre-existing ones using analytic or numerical calculations. On the other axis, fundamental research involves the basic laws of physics and the fundamental building blocks of the universe, while applied research involves systems or models more immediately applicable to society.

Let's consider the two quadrants of fundamental research. The fundamental–theoretical quadrant contains work like string theory and supersymmetry, as well as simulations to understand how galaxies arise from fundamental physics. This research often involves long calculations and heady math. Theorists often work in office complexes with endless supplies of whiteboards, and progress is made by thinking about a problem for a long time. Shoes aren't required, though many wear socks. To be sure, modern society would not exist without this quadrant. From Maxwell's electromagnetism to Einstein's relativity, what was originally fundamental and theoretical eventually became experimental and applied with the advent of radio, optics, and GPS technology. In other fields, such as quantum mechanics, fundamental theories and experiments progressed in concert, but in all cases the theoretical–fundamental understanding was a crucial piece of the puzzle.

Let's move on to the experimental–fundamental quadrant, which contains experimental work conducted to shed light on fundamental theories of nature. Sometimes the lines become blurry. Does chemistry belong here? Does biology? I would argue that because the understanding in these fields is largely empirical, and because they have yet to reach the point where a first-principles approach is useful, these fields do not belong on our diagram of physics research. Astronomy is another difficult case. Even as recently as a century ago, the classification of galaxies into essentially arbitrary shape categories constituted a breakthrough. Over the past century, though, astronomy has been transformed into a field where practitioners observe astrophysical systems and test quantitative predictions from first-principles theories, or perhaps try to infer first-principles theories from observations. Indeed, many now refer to it as astrophysics.

Experimentalists build and use complex apparatuses to establish controlled experimental conditions which isolate some fundamental property of nature. Many work in labs and shoes are nearly always required. We are lucky to have the generous support of institutions such as the Department of Energy, NASA, the National Science Foundation, as well as some private foundations, to pursue this often expensive research which is almost always years, decades, or even centuries away from application.

“Do you believe in the big bang?”

The big bang is like biological evolution: the non-technical world makes a big fuss about it, but scientists simply shake their heads and move on. Our understanding of biology and astronomy is so inextricably linked with these theories that there would be practically nothing left if we suddenly decided to reject them out of hand. Indeed, belief is not the most useful word here, or in science generally. To be sure, science requires at least a working belief that we live in a physical universe governed by mathematical laws, but if a scientific theory were backed by so little data that it required belief to be accepted, it would be unlikely to be very useful. There is, of course, an intermediate ground for theories still in development, or proposed before the technology exists to test them. No one would suggest discarding these theories outright, but they aren't said to be accepted either. Rather, scientists say about string theory and supersymmetry, for instance, that the jury is still out.

In this essay, I will discuss the modern field of cosmology and the evidence which puts the big bang squarely in the realm of fact. First, though, an important point about terminology. Many scientists and science popularizers play fast and loose with what the term big bang actually refers to. Does it refer to the predicted singularity 13.8 billion years ago? Or both the singularity and its aftermath? Just its immediate aftermath, or the resulting expansion continuing into the present day? Or does it refer to the theory which predicts that event? I suspect this ambiguity arises in part out of efforts to wow audiences at public lectures, and in part because the definition of the big bang is really beside the point in the technical literature. Experimentalists focus on building instruments and studying distant galaxies, and theorists work to better understand the predictions of quantum field theory and general relativity. Editors don't make a big deal over the precise vocabulary used in paper introductions. Because of all this ambiguity, and because of the baggage the term big bang carries among the public, I prefer not to use it at all. Let's just discuss what we know.

I could simply describe our modern picture of the Galaxy, the universe, and cosmology, but I fear it would be all too easy to dismiss it as just another philosophy. So I'll take a more historical approach and describe the main breakthroughs over the past century that have made cosmology the precision science it is today.

Our story begins in the early 1920s, by which time a number of poorly understood spiral nebulae, fuzzy spiral blobs, had been observed in the night sky. There were essentially two possibilities. On the one hand, perhaps they were clouds of gas in the far reaches of our own galaxy; on the other, perhaps they were galaxies like our own except farther away than any object ever seen before. The implications were enormous. Is our galaxy all there is, or is it part of a space even more vast with countless others?

Things came to a head in the famous Great Debate between astronomers Harlow Shapley and Heber Curtis at the Smithsonian in 1920. The debate itself was mostly a stunt; scientific disputes aren't settled by debate but by better data. At the time there existed measurements supporting both points of view, but none of the data were extremely convincing. The dispute wasn't settled until Edwin Hubble, using the new 100-inch Hooker telescope on Mount Wilson outside of Los Angeles, made the first measurements of the distances to these spiral nebulae and demonstrated that they lie far outside our galaxy. Hubble sought and monitored a type of pulsating star in these nebulae called Cepheid variables, which are known to pulsate with a period tightly related to their intrinsic brightness. By comparing their intrinsic and apparent brightnesses (more distant objects generally appearing dimmer), he could obtain their distances. Many of the spiral nebulae turned out to be millions of light-years away, while our galaxy is known to be only about 100,000 light-years from side to side. The debate was settled: the universe was much larger than we had known.
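The standard-candle logic is just the inverse-square law: the pulsation period gives the Cepheid's intrinsic luminosity, the telescope measures its apparent flux, and the distance follows. A sketch with made-up numbers (not Hubble's actual data):

```python
import math

def standard_candle_distance_m(luminosity_w, flux_w_m2):
    """Invert the inverse-square law F = L / (4 * pi * d**2) for d."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

L_SUN = 3.828e26          # solar luminosity, watts
METERS_PER_LY = 9.461e15  # meters per light-year

# Illustrative values: a bright Cepheid of ~10,000 solar luminosities
# whose measured flux at the telescope is 4e-16 W/m^2.
d = standard_candle_distance_m(1e4 * L_SUN, 4e-16)
print(f"{d / METERS_PER_LY / 1e6:.1f} million light-years")
```

A result of a few million light-years puts the nebula comfortably outside a galaxy only about 100,000 light-years across.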

For his next trick, Hubble turned his attention to even more distant galaxies, and the results were just as shocking. With only a few exceptions, all appeared to be flying away from us at staggering velocities of hundreds or thousands of kilometers per second, fast enough to travel around the earth in about a minute. Moreover, more distant galaxies were flying away from us faster...linearly faster. Exactly what you'd expect for a uniform expansion of space itself. It was as if a rubber sheet with dots on it were being stretched out; no matter which dot you're sitting on, all the others move away from you at a rate proportional to their distance. This relationship is known as Hubble's law, and the coefficient of proportionality, in units of velocity per distance, is known as the Hubble constant. Modern measurements place this value at roughly 70 km/s per Mpc, where 1 Mpc = 1,000,000 parsecs.
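Hubble's law fits in one line of code; taking the roughly 70 km/s per Mpc quoted above:

```python
H0 = 70.0  # Hubble constant, km/s per Mpc (approximate modern value)

def recession_velocity(distance_mpc):
    """Hubble's law: v = H0 * d."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away (about 326 million light-years):
print(f"{recession_velocity(100):.0f} km/s")
```

That works out to 7000 km/s, over 2% of the speed of light, for a galaxy at a quite ordinary cosmological distance.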

How can the universe itself be expanding? What does that even mean? A decade earlier, Einstein had applied the equations of general relativity to the universe as a whole and found that static solutions are impossible, which suggested to him that his equations were not correct. He found that a single constant added to the central equation of the theory, a cosmological constant, was sufficient to ensure a static solution. After hearing of Hubble's results, though, Einstein famously abandoned that addition, calling it his greatest blunder. It wasn't really a mistake per se, given that there wasn't any data at the time to test his prediction; more a missed opportunity. He could, after all, have gambled and predicted the expansion of the universe, and perhaps won a second Nobel prize for his trouble.

Despite this early progress, cosmology largely languished as a research backwater for decades, and for a time fell out of all memory, until the most unlikely creature of all, an engineer, made a breakthrough. In the course of developing ultra-sensitive radio receivers at Bell Labs, Arno Penzias and Robert Wilson built a horn antenna which picked up a very faint noise that they couldn't explain.

Radio astronomers often quantify the power of a radio signal by the equivalent temperature of a thermal source emitting the same amount of power. Let's build some intuition here. Any object at a non-zero temperature emits electromagnetic radiation, and the hotter it is, the higher the frequency of that radiation. Objects at room temperature emit predominantly in the infrared band, while objects at a few thousand degrees, like the sun (and the tungsten filaments in incandescent bulbs), emit in the optical band. At the other end of the spectrum, only very cold objects emit predominantly in radio waves. Note that we are only talking about electromagnetic waves emitted by the random jostling of atoms. By running currents through a wire, substantially more powerful radio waves may be produced which are not random at all, and are useful for transmitting Lady Gaga lyrics, among other things.
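The link between temperature and emission frequency is captured by Wien's displacement law: the peak wavelength of thermal emission is a constant divided by the temperature. A quick check of the three regimes above:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_m(temperature_k):
    """Wavelength at which a blackbody at the given temperature emits most."""
    return WIEN_B / temperature_k

for label, temp_k in [("room temperature", 300), ("the Sun", 5800), ("a 3 K object", 3)]:
    print(f"{label}: peaks near {peak_wavelength_m(temp_k) * 1e6:,.1f} micrometers")
```

The outputs land where the text says they should: about 10 micrometers (infrared) at room temperature, 0.5 micrometers (visible) for the Sun, and nearly a millimeter (microwave/radio) for a 3 K object.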

The background radiation detected by Penzias and Wilson was equivalent to that emitted by a very cold object, within a few degrees of absolute zero: 3 kelvin, to be precise. Moreover, it was almost exactly the same brightness in every direction they looked away from our galaxy. Their discovery set off a firestorm in the cosmology community. Based on calculations of how atoms could have formed in the hot and dense early universe, Ralph Alpher and Robert Herman had calculated that the ambient temperature of empty space today would have to be roughly 5 kelvin.

The exact temperature was somewhat uncertain, and later estimates ranged from a few kelvin to tens of kelvin, but Robert Dicke immediately recognized that the observed 3 kelvin radiation was cosmic in origin, corresponding exactly to the aftermath of that hot, dense early universe, which astronomers who opposed the theory had mocked as the big bang. But much as it's a losing battle to keep non-scientists from referring to the Higgs boson as the God particle, the name big bang stuck.

Two additional discoveries, made in the early 1990s, confirmed these results and began the era of precision cosmology. Both came from a satellite-borne experiment named the Cosmic Background Explorer (COBE), designed to better measure the cosmic microwave background (CMB) discovered by Penzias and Wilson. The first key result was a precise measurement of its spectrum, the amount of power emitted as a function of frequency. Purely thermal emitters have a very characteristic spectrum known as the blackbody curve or the Planck function. Radiation emitted by stars and galaxies often looks somewhat thermal, but atoms and dust inevitably emit or absorb some radiation at particular frequencies, always resulting in an imperfect thermal spectrum. In fact, the only known source of such a perfect thermal spectrum is the hot, dense early universe, when everything was a sort of primordial soup, or perhaps more precisely, a purée. All scientific opposition to the hot, dense early universe model, i.e., the big bang, evaporated after the publication of this spectrum.
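That characteristic spectrum is Planck's law. As a sanity check, we can scan it numerically for the frequency at which a 2.725 K blackbody (the measured CMB temperature) is brightest:

```python
import math

H = 6.626e-34   # Planck constant, J*s
K = 1.381e-23   # Boltzmann constant, J/K
C = 2.998e8     # speed of light, m/s

def planck(nu_hz, temp_k):
    """Spectral radiance of a blackbody at frequency nu (Planck's law)."""
    return (2 * H * nu_hz**3 / C**2) / (math.exp(H * nu_hz / (K * temp_k)) - 1)

# Scan 1-999 GHz for the brightest frequency of a 2.725 K blackbody:
freqs_hz = [ghz * 1e9 for ghz in range(1, 1000)]
peak = max(freqs_hz, key=lambda nu: planck(nu, 2.725))
print(f"peak near {peak / 1e9:.0f} GHz")
```

The peak lands near 160 GHz, squarely in the microwave band, hence the name cosmic microwave background.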

The second important discovery was the icing on the cake. On top of clear evidence of this exotic hot, dense, homogeneous early universe, COBE saw hints of how the modern clumpy universe could have emerged. To the imprecise radio antenna of Penzias and Wilson, the cosmic microwave background appeared just as bright in every direction, but the more sensitive COBE satellite distinguished two interesting patterns. First, after subtracting the sky-averaged intensity corresponding to the 3 kelvin radiation, it observed a dipole pattern in the sky. That is, the sky was uniformly brighter in one direction and fainter in the opposite one, just as you would expect from the motion of our galaxy relative to other galaxies: a Doppler shift. It's as if our dot on the expanding rubber sheet were also drifting slowly across the surface as the sheet expands, so the dots on one side don't seem to be receding as quickly, while those on the other recede faster than they otherwise would. Then, after subtracting this dipole pattern, we see an incredible, random-looking field of fluctuations. This anisotropy of the cosmic microwave background shows us the slight inhomogeneities in the nearly smooth early universe.
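The dipole is a first-order Doppler effect, delta_T / T = v / c, so its measured amplitude directly gives our speed relative to the background. The ~3.4 mK amplitude below is an assumed round number for illustration, not a value quoted from the COBE papers:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def velocity_from_dipole(delta_t_k, mean_t_k):
    """First-order Doppler shift: delta_T / T = v / c, solved for v."""
    return C_KM_S * delta_t_k / mean_t_k

# A ~3.4 mK dipole on a 2.725 K sky:
print(f"{velocity_from_dipole(3.4e-3, 2.725):.0f} km/s")
```

A few hundred kilometers per second, which is a very reasonable speed for a galaxy falling toward its neighbors.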

Over time, we predict, but have yet to directly observe, that the denser regions slowly drew in more matter and collapsed due to gravitational attraction, eventually forming stars and galaxies. More recent CMB surveys by the WMAP and Planck satellites have confirmed and extended these results with exquisite precision, and truly made the past two decades the golden age of cosmology.

Lastly, I would be remiss if I didn't mention the most shocking discovery in cosmology of the past few decades: the acceleration of the expansion of the universe. In 1997 and 1998, two groups attempting to reproduce, refine, and extend Hubble's original measurements observed that galaxies a thousand times more distant than Hubble's appeared only half as bright as they should given their distance. Recall that Hubble's law relates the recession velocity of a galaxy to its distance, and thus to its apparent brightness. An obvious possibility was obscuration by dust, but both groups went to great lengths to demonstrate this was not the case. Dust preferentially absorbs blue light, drastically reddening the spectrum, but the spectra of these galaxies appeared normal. They were just fainter than their other properties suggested they should be.

The consensus conclusion is that not only is the universe expanding, its expansion is accelerating, driven by exactly the sort of term in Einstein's field equation as the cosmological constant he had artificially added, but this time with observational proof. The acceleration of the universe is often attributed to a dark energy, usually meant as something more general than a cosmological constant, but it remains a big question mark. Don't be discouraged, though. Many of the best breakthroughs in physics have occurred after observations of the unexpected. How fortunate we are to witness one of them!