The importance of real-time search

Net Navigator: Google’s vice-president of search product and user experience, Marissa Mayer. Photo: The Guardian

Don’t let Marissa Mayer worry you, but she would like your camera, phone and surroundings to tell Google a bit more about you and the world around you, and to do it more often. As vice-president of “search product and user experience” at the search giant, she thinks we’ve only just got started on search, and that sensors, such as those built into the devices you already own, are the way forward.

Presently, search is limited to what is strictly online, put there by people: “What we offer today is very different from, say, (what) a friend of yours who might have access to a lot of facts or information (could), so the interaction is a lot less human and prompt and responsive,” she explains. The first stage of search involved text on web pages; the second stage, which we’re in now, involves humans, who are helping to identify images and add context to web pages, making the web appear knowledgeable.

Mayer, 34, gives an example of the latter: “We’re starting to see things (in search) that appear intelligent but actually aren’t semantically intelligent. So, for example, if you type GM into Google, you’ll probably get General Motors. But if you type GM foods, we actually give you pages about genetically modified foods and General Mills.”
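That kind of non-semantic disambiguation can be sketched as choosing an expansion for an ambiguous term from the words around it. The tiny co-occurrence table and function names below are invented for illustration; a real engine would learn such associations from billions of queries and documents:

```python
# Toy sketch of query-context disambiguation. The hint table is a
# hypothetical stand-in for statistics mined from real query logs.

CONTEXT_HINTS = {
    "gm": {
        "foods": "genetically modified",
        "crops": "genetically modified",
        "cars": "General Motors",
        "trucks": "General Motors",
    }
}

def expand(query: str) -> str:
    """Annotate an ambiguous abbreviation using neighbouring words."""
    words = query.lower().split()
    for i, word in enumerate(words):
        hints = CONTEXT_HINTS.get(word)
        if not hints:
            continue
        # Look at the immediately neighbouring words for a hint.
        neighbours = words[max(i - 1, 0):i] + words[i + 1:i + 2]
        for n in neighbours:
            if n in hints:
                return query + f"  [interpreted as: {hints[n]}]"
        # No disambiguating context: fall back to the most common sense.
        return query + "  [interpreted as: General Motors]"
    return query

print(expand("GM foods"))   # context word "foods" flips the interpretation
print(expand("GM"))         # no context, so the default sense wins
```

The point of the sketch is Mayer’s: the behaviour looks intelligent, but it is statistics over context words, not semantic understanding.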

But there’s a potential third form of search, she explains, which uses the sensors built into devices around us. “I think that some of the smartphones are doing a lot of the work for us: by having cameras they already have eyes; by having GPS they know where they are; by having things like accelerometers they know how you’re holding them.” Buildings and infrastructure have sensors built in too: strain gauges on bridges tell how well they are handling the stresses of everyday existence, cars carry temperature sensors, and rain gauges and gas samplers at any location can give you a picture of the world around it.

Real-time revelations

Which leads us to real-time search, a space where Twitter, in particular, has pulled ahead of the bigger company. Although it goes emphatically unsaid, it is clear from studying the reactions of Mayer, and of other senior people at Google, that the little company has unsettled its bigger, broader rival. Of course, Google had its own attempt at real-time many-to-many messaging: Jaiku, which it bought in October 2007. But Twitter was already riding the rising wave, and Jaiku quickly fell by the wayside; its developers open-sourced the code in March and have moved on to other things. Which, until those phones, cameras and gauges start announcing their data over the web, doesn’t leave many sources of real-time information.

Mayer says: “We think real-time search is incredibly important, and the real-time data that’s coming online can be super-useful in terms of us finding out something like, you know, is this conference today any good? Is it warmer in San Francisco than it is in Silicon Valley? You can actually look at tweets and see those sorts of patterns, so there’s a lot of useful information about real time and your actions that we think ultimately will reinvent search.”

Spot it? “Tweets”. It’s the only time, in the conversation and in the half-hour talk Mayer later gives to an audience of entrepreneurs, that she mentions any rival product or brand by name. She never says Microsoft or Bing or Internet Explorer when asked about the rival’s search engine or browser. “Tweets” implies Twitter, the company Google is often rumoured to be sniffing around to make up for its missed chance with Jaiku.

Making tweet music together?

So is Google talking to Twitter about integrating real-time search, which Twitter got by buying Summize last year? Mayer stonewalls. “I can say that we think that real-time search is very interesting,” she says.

She would know. She is a key player at Google, and one of its earliest employees. The company tries to keep its teams small, she says, adding: “By keeping smaller you avoid a lot of that bureaucracy that tends to snuff out an idea early.” But there is also the fact that Google is stuffed full of people who just love to experiment on its users. Google Mail uses a very slightly different blue for links than the main search page. Its engineers wondered: would that change the ratio of clickthroughs? Is there an “ideal” blue that encourages clicks? To find out, incoming users were randomly assigned to one of 40 different shades of links, from blue-with-green-ish to blue-with-blue-ish. It turned out blue-ness encouraged clicks more than green-ness. Who would have guessed? And who would have cared? Google, of course, which wants to get people clicking around the net.

Clicking, of course, ideally using its browser, Chrome, launched last year. Launched why? “Our engineers noticed that browsers didn’t seem to be evolving very much any more. No one was paying any attention to JavaScript, even though pages were using more and more JavaScript.” Chrome focuses on running JavaScript (such as you find in Google products) really quickly. So has it lived up to expectations? What were those expectations? “We have our goals in terms of users, numbers of versions.” And has it met them? “Yes.” Exceeded them? “It’s been pretty much on par. We’ve become pretty good at predicting how users will respond to something with original installs and downloads.”

Recognition factor

And finally, is she surprised by how slowly image recognition has evolved compared to voice recognition, given the effort put into it? After all, Google Images still asks for human help. Why haven’t the computers figured it out yet? “For voice, language is language. Sometimes a new word crops up, and then you have to figure out how to recognise that. With images, the problem is fundamentally changed... Now, with the dawn of YouTube and digital photography and 100bn images being uploaded to the web every year, you actually need to be able to identify all 6 billion people...”

What’s also lost in a still photo is the contextual information—movement, location, voice—that reality offers. “With a still image all you have are the pixels, and those pixels might look a lot like a photo of someone else, so I do feel for the image recognition people because their problem has become significantly harder in the internet age. We’re not getting closer to a solution. The solution just moves further away.”

The areas of success are where photos get metadata—geotagging—or where humans help: “You take one picture of your family at Christmas and tag this little red spot as ‘Meredith’, and the system says: ‘Every time we see something that’s the same shade of red intensity, in all of their pictures, those are Meredith.’ A lot of people think that’s cheating, but I don’t really think it is because that’s what humans do.

“So, image recognition is really trying to harness those things; and the sensor revolution we’re seeing—GPS that’s attached to your phone, to a camera—really can help us develop image technologies that work a lot better. It means we make the problem simpler.”
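The tagging trick Mayer describes can be sketched as a nearest-colour match: once a human labels one region, every region in the album whose colour is close enough inherits the label. The helper names, region identifiers and distance threshold below are invented for illustration; real systems use far richer features than a single average colour.

```python
# Sketch of label propagation by colour similarity, as in the
# "Meredith" example. All values here are illustrative assumptions.

def colour_distance(a, b):
    """Squared Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def propagate_tag(tagged_colour, tag, regions, threshold=900):
    """Give every region whose average colour is near the one
    hand-tagged by a human the same label."""
    return {
        region_id: tag
        for region_id, colour in regions.items()
        if colour_distance(tagged_colour, colour) <= threshold
    }

# One hand-tagged red region, then untagged regions from other photos.
meredith_red = (200, 30, 40)
album_regions = {
    "xmas_photo_2": (205, 28, 45),   # close: the same shade of red
    "xmas_photo_3": (198, 35, 38),   # close
    "beach_photo":  (30, 90, 200),   # a blue region: no match
}
print(propagate_tag(meredith_red, "Meredith", album_regions))
# → {'xmas_photo_2': 'Meredith', 'xmas_photo_3': 'Meredith'}
```

This is also why the GPS metadata she mentions helps: restricting candidate matches by place and time shrinks the pool of regions the similarity test has to get right, which “makes the problem simpler”.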
