To locate the program and launch it: it is found at Start > All Programs > Microsoft Search Server > Microsoft Search Server Administration.
When you click on that, you will notice that it launches the Search Server Administration page. Also, if you look at the Central Administration page of WSS, in my case I see a Shared Services link, which is again the Search Server Administration page, with the following links under it:
1> Search Administration
2> Search Settings
3> Search usage reports
4> Usage analysis
>> Getting back to the Administration page: it has a lot of links and web parts organized into different sections of the page.
>> The left pane, which works like a Quick Launch bar, contains links to the Search Administration and Central Administration pages.
>> The second section is Crawling, which has 8 links to make crawling easier and simpler. They are:
1. Content Sources: These determine the sites that are included in the crawl. Here you can create crawl schedules, configure crawl settings, and initiate a full or incremental crawl. You can also add new content sources, which can include SharePoint sites, websites, file share folders, and Exchange public folders.
2. Crawl Rules: By default there are no rules, but you can add a rule for a path, say http://teamsite, and then choose to exclude items under that URL, or include complex URLs (e.g., URLs containing a ?), specify a content access account, use a client certificate, or use cookies for crawling.
3. Crawl Log: Shows the number of crawls, the URLs included in each crawl, and the successes, warnings, and errors, with failure messages for troubleshooting.
4. Default Content Access Account: Shows the account used to crawl the content. You can change the account, but you would then have to perform a full crawl to get results.
5. Manage File Types: Used to manage the file types that can be crawled. It includes 38 types by default, and you can add more.
6. Reset Crawled Content: This resets the crawled content indexes. Search will not work until you run a full crawl again, which re-indexes all the files into the search database.
7. Crawler Impact Rule: This helps you control the load the crawler places on a site. You can specify a URL and set the number of documents that can be requested simultaneously; the choices are 1, 2, 4, or 8.
8. Farm-Level Search Settings: Here you can specify or change the administrator's e-mail address to contact in case of a problem, specify a proxy server and configure its settings, set the timeout (the amount of time the server will wait when connecting to other services), and choose whether to use or ignore SSL certificate warnings.
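To make the full-versus-incremental distinction from the Content Sources link concrete, here is a minimal Python sketch (an in-memory model of my own, not the actual Search Server crawler): a full crawl re-indexes everything, while an incremental crawl only picks up items modified since the last crawl.

```python
# Hypothetical sketch, not the Search Server API: the difference between
# a full and an incremental crawl over an in-memory content source.
def crawl(items, last_crawl_time=None):
    """Return the items to index. A full crawl (last_crawl_time=None)
    re-indexes everything; an incremental crawl only returns items
    modified since the previous crawl."""
    if last_crawl_time is None:      # full crawl
        return list(items)
    return [i for i in items if i["modified"] > last_crawl_time]

docs = [
    {"url": "http://teamsite/a.docx", "modified": 100},
    {"url": "http://teamsite/b.docx", "modified": 250},
]
print(len(crawl(docs)))       # full crawl: 2 items
print(len(crawl(docs, 200)))  # incremental crawl: only b.docx, 1 item
```

This is also why changing the default content access account or resetting the crawled content forces a full crawl: the incremental shortcut only works when the previous index is still valid.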
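The include/exclude behavior of Crawl Rules (item 2 above) can be sketched roughly like this; the rule shapes and the complex-URL flag are simplifications of mine, not the real product model.

```python
from fnmatch import fnmatch

# Rough sketch of crawl-rule evaluation; the data shapes are invented
# for illustration, not taken from Search Server.
def allowed(url, rules, crawl_complex_urls=False):
    """Apply the first matching (pattern, include) rule; 'complex' URLs
    (those with a query string) are skipped unless explicitly enabled."""
    if "?" in url and not crawl_complex_urls:
        return False
    for path_pattern, include in rules:
        if fnmatch(url, path_pattern):
            return include
    return True  # no rule matched: crawl by default

rules = [("http://teamsite/archive/*", False),  # exclude rule
         ("http://teamsite/*", True)]           # include rule
print(allowed("http://teamsite/docs/plan.docx", rules))    # crawled
print(allowed("http://teamsite/archive/old.docx", rules))  # excluded
print(allowed("http://teamsite/list.aspx?id=3", rules))    # complex URL, skipped
```

Rule order matters here, which mirrors why you would place a narrower exclude rule above a broader include rule.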
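The idea behind a Crawler Impact Rule (item 7 above), capping how many documents are requested from one site at the same time, can be modeled with a semaphore; the class and fetch method below are illustrative only, not part of any Search Server API.

```python
import threading

# Sketch of a crawler impact rule: limit simultaneous document requests
# against one site to 1, 2, 4, or 8. Site name and fetch body are made up.
class ImpactRule:
    def __init__(self, site, simultaneous_requests):
        assert simultaneous_requests in (1, 2, 4, 8)
        self.site = site
        self._slots = threading.Semaphore(simultaneous_requests)

    def fetch(self, url):
        with self._slots:            # blocks while the site is at its limit
            return f"fetched {url}"  # placeholder for the real HTTP request

rule = ImpactRule("http://teamsite", simultaneous_requests=2)
print(rule.fetch("http://teamsite/a.docx"))
```

A lower limit is gentler on the crawled server but slows the crawl down, which is the trade-off the rule exposes.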
Then there is another section, Queries and Results, which includes:
- Authoritative Pages: Here you can promote and demote your links in the results. Suppose you want site A to show results ahead of site B for a keyword: you can set site A as a most authoritative page and site B as a least authoritative page, so whatever you search for, results from site A will rank higher.
- Federated Locations: I will talk about this later when we cover federation.
- Metadata Property : Crawled properties are automatically extracted from crawled content. Users can perform queries over managed properties. Use this option to create and modify managed properties and map crawled properties to managed properties. Changes to properties will take effect after the next full crawl.
- Server Name Mapping: Specify the address at which the content will be crawled and the address that will be displayed in search results.
- Search Result Removal: Removes the specified URLs from the search results.
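To illustrate what Authoritative Pages do to ranking, here is a toy Python sketch; the authority levels, scores, and site names are invented, not the actual ranking algorithm.

```python
# Toy model: results from a more authoritative site sort ahead of
# results from a less authoritative one, regardless of raw relevance.
LEVELS = {"most": 0, "second": 1, "third": 2, "least": 3}

def rank(results, authority):
    """Sort results so pages from more authoritative sites come first."""
    def key(r):
        site = r["url"].split("/")[2]  # crude host extraction
        return (LEVELS.get(authority.get(site), 2), -r["relevance"])
    return sorted(results, key=key)

authority = {"siteA": "most", "siteB": "least"}
hits = [{"url": "http://siteB/page", "relevance": 0.9},
        {"url": "http://siteA/page", "relevance": 0.7}]
print([h["url"] for h in rank(hits, authority)])
# siteA's page comes first despite siteB's higher raw relevance
```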
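The crawled-to-managed property mapping behind the Metadata Property link can be pictured like this; the property names below are examples of mine, not the real defaults.

```python
# Illustrative mapping of managed properties to the crawled properties
# they are built from; names are invented for the example.
managed_map = {
    "Author": ["ows_Author", "mail:from"],
    "Title":  ["ows_Title", "basic:displaytitle"],
}

def managed_properties(crawled):
    """Build the queryable managed properties for one crawled document."""
    doc = {}
    for managed, sources in managed_map.items():
        for crawled_name in sources:
            if crawled_name in crawled:  # first mapped source wins
                doc[managed] = crawled[crawled_name]
                break
    return doc

print(managed_properties({"ows_Author": "Asif", "basic:displaytitle": "Setup"}))
```

Users query against the managed names on the left, which is why a mapping change only shows up after the next full crawl rebuilds these values.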
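Server Name Mapping boils down to a prefix rewrite of crawled URLs before they are shown in results; the hostnames below are made up for illustration.

```python
# Sketch of a server name mapping: crawled address -> displayed address.
mappings = {"http://wfe01": "http://search.contoso.com"}

def display_url(crawled_url):
    """Rewrite the crawled URL into the URL shown in search results."""
    for crawled_prefix, display_prefix in mappings.items():
        if crawled_url.startswith(crawled_prefix):
            return display_prefix + crawled_url[len(crawled_prefix):]
    return crawled_url  # no mapping: display as crawled

print(display_url("http://wfe01/docs/plan.docx"))
# -> http://search.contoso.com/docs/plan.docx
```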
Usage Reports:
- Queries Report: This shows reports for the queries searched by users; it also shows a pie chart of each site's contribution in generating the requests.
- Results Report: Shows reports for queries with zero results, the best bets configured, and queries with zero best bets.
Webparts:
Active Crawls: This web part shows the crawls that are actively running, how long each has been running, and the nature of the crawl (full or incremental).
Recently Completed Crawls: This web part shows the completed crawls with their type, duration, successes, and errors.
System Status: Shows the system status, which includes the crawl status (running or idle), items in the index, server status (normally the indexing server), propagation status, default content access account, contact e-mail, proxy server (none), scopes update status, scopes update schedule, scopes needing update, search alerts status, and query logging.
Shortcuts: This is actually a Links web part used for navigation.
I Want To: This web part lets you provide links, like an FAQ, to help people configure search; clicking a link takes you to the search help.