The new version of the Splunk machine data search engine comes with a distributed indexing technology that could save storage costs for customers running the software as a high-availability service.

"The data that is being collected in Splunk is becoming more mission critical," said Sanjay Mehta, Splunk vice president of product marketing, explaining the need for distributed indexing.

Splunk Enterprise 5 can also generate reports more quickly than its predecessor, the company claims, and comes with new tools to link the software to third-party programs.

The Splunk search engine was designed to collect and index data generated by machines, such as log files from servers and routers. Administrators can use such data to troubleshoot problems and ensure smooth operations. The company has also pitched Splunk as a tool for business managers to collect and analyze operational intelligence.

This is the first version of Splunk to use a new indexing technology that incorporates replication into its routine operations. The software will store multiple copies of its index, which it uses to answer user queries, across different servers. If one server goes down, indexing will continue on the other server, or servers. When the downed server comes back online, it is then updated with the new information. Users consulting Splunk can get their answers from any operational server, which increases the reliability of the service.

"The index data is replicated as it is streaming into Splunk. You can make as many copies as you need," Mehta said. "We have a distributed architecture, so the query tier determines where to fulfill the queries."

With distributed indexing, organizations will no longer need to keep backups on storage area networks (SANs) for fault-tolerant operations, Mehta explained.
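The failover behavior described above can be illustrated with a small toy model. This is a sketch of the general pattern (replicate each incoming event to every online server, serve queries from any operational copy, catch a returning server up from the stream), not Splunk's actual implementation; all class and method names here are hypothetical.

```python
# Toy model of replicated indexing with failover. Illustrative only --
# the names and mechanics are assumptions, not Splunk internals.

class IndexServer:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.events = []          # this server's copy of the index

    def ingest(self, event):
        self.events.append(event)

class ReplicatedIndex:
    """Replicates each incoming event to every online server."""
    def __init__(self, servers):
        self.servers = servers
        self.log = []             # full event stream, used to catch servers up

    def ingest(self, event):
        self.log.append(event)
        for s in self.servers:
            if s.online:
                s.ingest(event)

    def query(self):
        # Any operational server can answer; use the first one online.
        for s in self.servers:
            if s.online:
                return list(s.events)
        raise RuntimeError("no operational servers")

    def recover(self, server):
        # A returning server is updated with the events it missed.
        server.events = list(self.log)
        server.online = True

a, b = IndexServer("a"), IndexServer("b")
cluster = ReplicatedIndex([a, b])
cluster.ingest("error: disk full")
a.online = False                  # server a goes down
cluster.ingest("warn: retrying")  # indexing continues on server b
cluster.recover(a)                # a rejoins and catches up
assert a.events == b.events == cluster.query()
```

In this simplified model the ingest path fans out synchronously; a real clustered indexer would replicate asynchronously and track replication state per index bucket.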