Table of Contents

Please see Remote Pulse main page if you haven't already!

Wow, there's much more to read and learn on the Wikipedia pages.

Use a Raspberry Pi camera?! $30 for full access to 90 FPS data using their straightforward interface.

This work uses the same averaging algorithm as ——–, but rigorously defends its optimality and combines it with Lucas-Kanade point tracking to handle non-still subjects. Existing approaches are not real-time, don't explain the underlying noise characteristics of the video, and require either an assumption about the underlying dataset (bandpassing to heart-rate frequencies) or …

For our Pattern Recognition class, my partner (Billy Keyes) and I implemented an algorithm to track cardiovascular vital signs (like pulse and heart-rate variability) from video of exposed skin. Our current version uses OpenCV for face tracking and tracks the variations in an average over a region of interest (in this version, a sub-rectangle of the face). This readout is very similar to the SpO2 sensors you might find in hospitals, except at a fraction of the cost and much more convenient to use! Future plans are to improve robustness with Lucas-Kanade point tracking.
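The ROI-averaging step above can be sketched like this. A minimal NumPy version, assuming the face bounding box has already come back from a detector (e.g., an OpenCV Haar cascade, omitted here); the `shrink` parameter and the green-channel choice are my assumptions for illustration:

```python
import numpy as np

def roi_mean_green(frame, face_box, shrink=0.5):
    """Average the green channel over a sub-rectangle of a detected face.

    frame: H x W x 3 uint8 array; channel 1 is green.
    face_box: (x, y, w, h), e.g. from a face detector (not shown).
    shrink: fraction of the face box to keep, centered, to avoid hair/edges.
    """
    x, y, w, h = face_box
    dw, dh = int(w * (1 - shrink) / 2), int(h * (1 - shrink) / 2)
    roi = frame[y + dh : y + h - dh, x + dw : x + w - dw, 1]
    return float(roi.mean())

# One sample per frame; the pulse shows up as a tiny oscillation
# in this time series after detrending/bandpassing.
frame = np.full((120, 160, 3), 128, dtype=np.uint8)
sample = roi_mean_green(frame, (40, 30, 60, 60))
```

Green is a common choice because hemoglobin absorbs strongly there, but any channel (or a combination) can be averaged the same way.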

Final Paper

Excellent NASA paper on determining pulse from radar, infrared, and the visible spectrum. Gives a good overview, but doesn't do much dreaming.

Cameras

Astronomers stack pictures to improve their final images. Some use the ToUcam (a Philips webcam), whereas others use a 5MP imager, like this one

The MIT guys used a DSLR camera, but it looks like it applied H.264 compression.

Just do a few tests and be done with it!

Potential Uses

Daytime Star Viewer

Non-Contact Heart Rate Detector for Personal Fitness

Facial Verification

Other ideas

Old Stuff

OpenCV Installation

OS X

 * Cross-Platform deployment (you know, dmg's and stuff): http://qt-project.org/doc/qt-4.8/deployment.html

OpenCV Compiling Cross-Platform

Other Stuff

Getting Rid of Poisson Shot Noise from the CCD

Active Appearance Model Tracker (Getting rid of motion noise)

Getting FaceTracker working on Windows

How Behind I Am

Hey Jan,

I am ashamed to send you this code, but here you go. Didn't make time to clean it up this weekend.

How about this: I'll explain the algorithm to you real quick. Maybe you want to trade knowledge with me on bioinformatics? Oh, wow, bioinformatics is kind of what we're talking about here! Yeah, could you maybe give me the two-minute explanation of how de Bruijn graphs relate to de novo assembly?

So, for this algorithm (and any pulse detection algorithm), you need to amplify small color changes in local areas of exposed skin. The face and hands work well because, I think, they're dense with blood vessels and they're not covered by hair. Literally, hair will screw this method up; it's that sensitive. :)

Anyways, to do that, assuming the person is perfectly still (which doesn't happen, because their heart is beating and slightly moves their face too), you subtract the “mean” value from every pixel and then multiply the leftover “variance” (which you hope is only the change in color of the face) by a magnification factor. At this level, noise from photons hitting the camera CCD actually matters, but it shows up as spotty noise whereas the blood color change is pretty consistent (which is why the paper just smoothed the whole image). A good alternate video is on my website here. Anyways, to find the mean value, people often use a running exponential filter or a boxcar filter (add 'em up and divide by N, aka the average). The running exponential filter is easier to program:

currentFilteredValue = (alpha)*currentRawSample + (1-alpha)*lastFilteredValue

whereas for a boxcar you have to keep an array of numbers. Someday I'll just make a well-documented set of functions like my boss has right now, but that's beside the point.
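The two filters side by side, as a minimal Python sketch (the alpha value and window size are just example choices):

```python
from collections import deque

def ema_update(sample, last, alpha=0.05):
    """Running exponential filter: one multiply-add, no history buffer."""
    return alpha * sample + (1 - alpha) * last

class Boxcar:
    """Boxcar (moving average): keep the last N samples and average them."""
    def __init__(self, n):
        self.buf = deque(maxlen=n)

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)

# Both settle toward the mean of the signal; the boxcar gets there
# exactly once its window fills, the exponential filter asymptotically.
ema = 0.0
box = Boxcar(4)
for s in [10, 10, 10, 10]:
    ema = ema_update(s, ema)
    avg = box.update(s)
```

The trade-off is memory versus responsiveness: the exponential filter stores one number per pixel, while the boxcar stores N but forgets old samples completely after N frames.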

So, for the program, just do this for each pixel:

 * Get the mean value
 * Subtract the mean value from the current value
 * Multiply the result by a magnification factor
 * Hope you have enough light and the person didn't move
 * Redraw to the image buffer and display the result
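Those per-pixel steps can be sketched in a few lines of NumPy; the running mean uses the exponential filter from above. The alpha and gain values here are illustrative assumptions, not the ones from the attached code:

```python
import numpy as np

def magnify(frames, alpha=0.05, gain=30.0):
    """Amplify tiny per-pixel color changes across a frame sequence.

    frames: list of H x W (or H x W x 3) arrays.
    For each pixel: track a running mean, subtract it from the current
    value, scale the residual, and add it back before display.
    """
    mean = frames[0].astype(np.float64)  # seed the running mean
    out = []
    for f in frames:
        f = f.astype(np.float64)
        mean = alpha * f + (1 - alpha) * mean  # running exponential mean
        residual = f - mean                    # the tiny color change
        out.append(np.clip(f + gain * residual, 0, 255))
    return out
```

Vectorizing over the whole frame like this does the "for each pixel" loop implicitly; the clip keeps the magnified result inside the displayable 0–255 range.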

Fun times! I threw the attached code together in time for our final demo two semesters ago and was waiting for a much better method to come out, which the MIT paper did. I'm still working through their solution, because ours is fundamentally flawed (the person moves!). Whoo, time for work.

Looking forward to still talking! The other 3 interested people never emailed back :/

Additional Research