The key to the speed and reliability of Google search is cutting up data into chunks, its top engineer said.
All machines run a stripped-down Linux kernel. The distribution is Red Hat, but Urs Hoelzle - Google vice president of operations and vice president of engineering - said Google uses little of the distro. Google has also written its own kernel patches for problems that haven't been fixed upstream.
Google splits the Web pages it caches into pieces it calls "shards," small enough that several fit on one machine, and replicates each shard on several machines so that if one breaks, another can serve up the information. The master index is likewise split among several servers, and that set is also replicated several times. The engineers call these machines "chunk servers."
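A minimal sketch of that layout in Python, assuming a hash-based mapping from URL to shard and a fixed replica count per shard; the shard count, replica count, and machine names below are invented for illustration, since the article gives no numbers:

```python
import hashlib

NUM_SHARDS = 16           # hypothetical; the article gives no shard count
REPLICAS_PER_SHARD = 3    # hypothetical replica count

def shard_for(url: str) -> int:
    """Map a cached page to a shard by hashing its URL (one plausible scheme)."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

def replicas_for(shard: int) -> list[str]:
    """Each shard lives on several machines, so losing one machine loses nothing."""
    return [f"chunkserver-{shard}-{r}" for r in range(REPLICAS_PER_SHARD)]

url = "http://example.com/page"
shard = shard_for(url)
print(shard, replicas_for(shard))  # e.g. 7 ['chunkserver-7-0', 'chunkserver-7-1', 'chunkserver-7-2']
```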
As a search query comes into the system, it hits a Web server and is then farmed out to the index servers. The index is split among a set of machines, and one such set holds one complete copy of the index, so answering a query requires one full set of servers. Each set is replicated as a fail-safe, and the replication also increases throughput: if one set is busy, a new query can be routed to the next set, which drives down search time per box.
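As a rough illustration of that routing, here is a sketch in the same spirit; each "index set" below stands for one complete replicated copy of the index, and the set names and load numbers are invented:

```python
# Hypothetical model: each index set is one complete copy of the index,
# split across many index servers. Sets are interchangeable replicas.
INDEX_SETS = {
    "set-a": {"busy": 12},   # invented load figures for illustration
    "set-b": {"busy": 3},
    "set-c": {"busy": 7},
}

def route_query(query: str) -> str:
    """Send the query to the least-loaded complete index set.

    Replication is primarily a fail-safe, but it also raises throughput:
    a busy set never blocks a query that another set can answer.
    """
    return min(INDEX_SETS, key=lambda name: INDEX_SETS[name]["busy"])

print(route_query("linux kernel patches"))  # -> set-b
```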
In parallel, clusters of document servers contain copies of Web pages that Google has cached. Hoelzle said that the refresh rate is from one to seven days, with an average of two days. That's mostly dependent on the needs of the Web publishers.
"When we have your top 10 results, they get sent to the document servers, which load the 10 result pages into memory," Hoelzle said. "Then you parse through them and find the best snippet that contains all the query words."
From Peeking Into Google, an article written last year at EclipseCon 2005, an open source conference.
An interesting video on how Google search works is the Google Factory Tour (it is very long: 5 hr 39 min 41 sec).