I haven't read the source article, which is featured on the cover of the most recent issue of Neuron, but here's my rough understanding of what they did. Human subjects were shown 400 random 10 x 10 pixel black-and-white images for a period of 12 seconds each. During viewing, the blood flow in their primary visual cortex was recorded using fMRI. At some later point, the subjects were shown one of the 400 random images, the blood flow was monitored using fMRI, and a computer algorithm compared the previously recorded blood flow pattern to the current one to try to determine which image the subject was looking at.
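The matching step described above can be sketched as a nearest-neighbor comparison: store one response pattern per image, then pick the stored pattern most correlated with the new scan. This is a minimal illustrative sketch, not the paper's actual algorithm — the voxel counts, the correlation-based matching, and the `identify_image` helper are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 400 stored fMRI response patterns, one per image,
# each flattened to a vector of voxel activations (dimensions invented here).
n_images, n_voxels = 400, 1000
stored_patterns = rng.normal(size=(n_images, n_voxels))

def identify_image(new_pattern, stored_patterns):
    """Return the index of the stored pattern most correlated with new_pattern."""
    # Pearson correlation between the new scan and every stored scan
    centered_new = new_pattern - new_pattern.mean()
    centered_stored = stored_patterns - stored_patterns.mean(axis=1, keepdims=True)
    corrs = centered_stored @ centered_new / (
        np.linalg.norm(centered_stored, axis=1) * np.linalg.norm(centered_new)
    )
    return int(np.argmax(corrs))

# Simulate a noisy re-viewing of image 42: the original pattern plus scanner noise.
noisy_scan = stored_patterns[42] + 0.5 * rng.normal(size=n_voxels)
print(identify_image(noisy_scan, stored_patterns))
```

With modest noise relative to the signal, correlation with the correct stored pattern dominates the chance correlations with the other 399, which is roughly why this kind of identification can work even on a coarse measure like blood flow.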
Pretty nifty, huh? Here is another image showing the pattern the subject was viewing at the top and the reconstructed images based on the fMRI analysis:
I'm actually surprised that activity at such a coarse resolution, i.e. blood flow, is replicable enough between viewings of stimuli to allow for this kind of accuracy. Blood flow is an indirect measure of neuronal activity, and I wouldn't expect it to be very uniform from stimulus to stimulus, but apparently it is, at least in this sort of experimental setup. I do wonder how robust it is over time and with more complex stimuli.
And then there's the pop science silliness that usually comes with these sorts of stories:
According to the researchers, further development of the technology may soon make it possible to view other people's dreams while they sleep.

I think it may be too soon to use words like "soon". Dreams are not a result of directly viewing visual stimuli, so it is unlikely that activity in the primary visual cortex will be the same when, for example, a subject is looking at a giraffe in a waking state as when they are dreaming of one. But who knows? Maybe in 10 years we'll be able to record our dreams and play them back for our friends.