Imagine a scanner the size of a grain of rice, built into your phone. You go to the grocery store and point it at something you want to buy. If it’s an apple, the scanner will tell you what variety it is, how much vitamin C it has and how long it has been in cold storage. If it’s a fish, you’ll learn whether it’s really orange roughy or just tilapia being passed off as something more expensive. If it’s a muffin, the device will tell you whether there’s gluten in it.
Every substance reflects (and absorbs) light in a different way, and the graph of that reflected light is a kind of optical fingerprint. Except it’s better than a fingerprint, because it encodes more than identity. Identifying a food and its characteristics based on the scan is a twofold job: first, you match the optical reading against a library of known objects; second, you read the topography of the graph to zero in on specific characteristics. Together, the two can tell you an awful lot about what you’re scanning.
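That two-step process can be sketched in a few lines of code. This is a minimal illustration, not any scanner maker’s actual algorithm: the library entries, wavelength bands, and reflectance values below are all made up, and real systems use far denser spectra and more sophisticated matching than the nearest-neighbor comparison shown here.

```python
import numpy as np

# Hypothetical library of reference spectra: each entry is reflectance
# sampled at the same five wavelength bands as the scanner's output.
LIBRARY = {
    "apple (Fuji)":  np.array([0.62, 0.55, 0.48, 0.70, 0.81]),
    "orange roughy": np.array([0.30, 0.42, 0.51, 0.47, 0.39]),
    "tilapia":       np.array([0.28, 0.35, 0.58, 0.52, 0.41]),
}

def identify(scan: np.ndarray) -> str:
    """Step 1: match the scan to the closest known optical fingerprint
    (here, smallest Euclidean distance to a library spectrum)."""
    return min(LIBRARY, key=lambda name: np.linalg.norm(scan - LIBRARY[name]))

def band_depth(scan: np.ndarray, band: int) -> float:
    """Step 2: read the graph's topography -- here, how far the curve
    dips at one band below the average of its two neighbors, a proxy
    for how strongly the substance absorbs at that wavelength."""
    shoulder = (scan[band - 1] + scan[band + 1]) / 2
    return shoulder - scan[band]

scan = np.array([0.60, 0.54, 0.47, 0.71, 0.80])  # a fresh reading
print(identify(scan))            # step 1: what is it?
print(band_depth(scan, band=2))  # step 2: how deep is the dip at band 2?
```

In a real device, step 2 is where the interesting properties come from: the depth and position of particular absorption dips correlate with things like sugar content or water loss, which is how a scan could estimate vitamin levels or time in cold storage rather than just naming the food.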