It’s been a while since Google introduced its ‘game’ Image Labeler, which crowdsources judgments about how relevant search terms are to particular images. Google scientists have now published a paper describing the next generation of image search. It eliminates the need for humans to describe images and instead uses computers to analyze them. The system can determine the similarity between images, and images that resemble a popular image automatically receive a higher ranking.
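The paper isn’t quoted here, but the idea of ranking images by their similarity to popular ones can be sketched as a PageRank-style computation over an image similarity graph. The similarity scores below are made up for illustration; a real system would derive them from visual features.

```python
# Hypothetical pairwise similarity scores between four images
# (e.g. from matching visual features); values are illustrative only.
sim = [
    [0.0, 0.9, 0.8, 0.1],
    [0.9, 0.0, 0.7, 0.2],
    [0.8, 0.7, 0.0, 0.1],
    [0.1, 0.2, 0.1, 0.0],
]
n = len(sim)

# Column-normalize so each image distributes its "vote" proportionally.
col_sums = [sum(sim[i][j] for i in range(n)) for j in range(n)]
P = [[sim[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# PageRank-style power iteration with damping: an image ranks highly
# when images that themselves rank highly are similar to it.
d = 0.85
rank = [1.0 / n] * n
for _ in range(100):
    rank = [(1 - d) / n + d * sum(P[i][j] * rank[j] for j in range(n))
            for i in range(n)]

print(rank)  # image 3, dissimilar to the rest, ends up ranked lowest
```

Here the three mutually similar images reinforce each other’s scores, while the outlier sinks, which is exactly the “similar to a popular image gets a higher ranking” behavior described above.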
This might prove very useful, but I’m wondering whether search results delivered by this technique won’t display only near-identical images. You’ll probably need to be very specific in your search terms if the first results don’t show the kind of image you’re looking for. On the other hand, right now results sometimes seem to have nothing in common with the terms you entered, so this may indeed be a big step forward for visual search.