The history of Internet search engines
What is the history of Internet search engines? And why is the crawling program called a "spider" rather than simply a search engine robot?
As the name suggests, the Internet is also called the Web: it links queryable information from all over the world into something like a giant net. A program that starts at one point on this net and crawls from link to link, collecting bits and pieces of information along the way, naturally calls to mind a spider moving across its web. That is where the name "spider" comes from.
When we type in what we want to search for, the engine does not send the spider out at that moment. The spider has already crawled the relevant paths in advance, collected the matching content, and stored it; the engine then matches our query against what was collected and presents the matching websites as a list, which we click through page by page to visit each site the spider found for us. This is why the web spider feels as fast as light: the results can appear within a second of clicking, because the crawling was done beforehand.
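To make the crawling step concrete, here is a minimal toy spider sketched in Python using only the standard library. Everything in it (the `crawl` function, the `LinkExtractor` helper, the start URL) is illustrative rather than any real engine's code, and a production spider would add politeness rules (robots.txt), retry logic, and distributed scheduling on top.

```python
# A toy crawler sketch, not a production spider. It assumes the start URL
# is reachable and that pages are simple HTML.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Breadth-first crawl: fetch a page, queue its links, repeat."""
    seen = {start_url}
    queue = deque([start_url])
    pages = {}                      # url -> raw HTML
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                # skip unreachable or broken pages
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages

if __name__ == "__main__":
    for url in crawl("https://example.com"):
        print(url)
```

Note the `seen` set: it is what keeps the spider from crawling in circles when pages link back to each other, which on a web-shaped network they constantly do.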
Search engines are composed of five basic elements (a code sketch of steps 2 through 5 follows this list):
1. Web crawling: the spider crawls across the web, following links in sequence and fetching the relevant pages.
2. Data analysis: once the crawled pages are loaded, data analysis begins, with searching and filtering carried out according to the engine's inclusion standards. The spider then runs its calculations, what we usually call algorithms, analyzing each site's content, weight, external links, internal links, and keywords to begin the sorting process.
3. Information storage: the results are stored, sorted according to the analyzed content.
4. Cache processing: caching is divided into a temporary cache and a period cache. The temporary cache ranks pages for the moment, while the period cache is generally refreshed once every 24 hours.
5. Display data: the data prepared in the steps above is displayed, including the ranking order and the weight of each page.
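Here is a hedged sketch of those remaining steps, indexing, ranking, and period caching, again in plain Python. The `pages` dict is assumed to come from a crawler like the one above; `build_index`, `search`, and `ResultCache` are made-up names for illustration, and real engines rank on far more signals (links, authority, freshness) than the raw term counts used here.

```python
# A minimal sketch of the index-and-rank side, under the assumption that
# `pages` maps url -> page text, as produced by a crawler like the one above.
import re
import time
from collections import Counter, defaultdict

def build_index(pages):
    """Inverted index: term -> {url: term frequency}."""
    index = defaultdict(dict)
    for url, text in pages.items():
        for term, count in Counter(re.findall(r"\w+", text.lower())).items():
            index[term][url] = count
    return index

def search(index, query):
    """Score each page by summed term frequency and return ranked URLs."""
    scores = Counter()
    for term in query.lower().split():
        for url, freq in index.get(term, {}).items():
            scores[url] += freq
    return [url for url, _ in scores.most_common()]

class ResultCache:
    """Period cache: stores ranked results, refreshed after `ttl` seconds
    (the article's example period is 24 hours)."""
    def __init__(self, ttl=24 * 3600):
        self.ttl = ttl
        self.store = {}             # query -> (timestamp, results)

    def get(self, query, index):
        entry = self.store.get(query)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]         # still fresh: serve the cached ranking
        results = search(index, query)
        self.store[query] = (time.time(), results)
        return results

if __name__ == "__main__":
    pages = {"https://a.example": "spider web crawler spider",
             "https://b.example": "search engine crawler"}
    index = build_index(pages)
    cache = ResultCache()
    print(cache.get("spider crawler", index))
```

The 24-hour default in `ResultCache` mirrors the article's period cache: repeated queries are served from stored results until the entry expires, at which point the ranking is recomputed.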