Unit 11: Monitoring SQL Server for Performance
those deadlocks, you can create a file defining those events, event data, and event
filters; save the definition; and reapply the filter the next time you want to monitor
deadlocks. SQL Profiler uses trace definition files for this purpose (a Transact-SQL
sketch of an equivalent server-side trace appears after this list).
v. Analyze captured event data. To analyze captured event data, you load the saved data
back into the application that captured it. For example, a captured trace from SQL
Profiler can be reloaded into SQL Profiler for viewing and analysis; a saved trace
file can also be queried directly with Transact-SQL, as the second sketch after this
list shows.
Analyzing event data involves determining what is happening and why. This
information allows you to make changes that can improve performance, such as
adding more memory, changing indexes, correcting coding problems with Transact-
SQL statements or stored procedures, and so on, depending on the type of analysis
performed. For example, you can use the Index Tuning Wizard to analyze a captured
trace from SQL Profiler automatically and make index recommendations based on
the results.
vi. Replay captured event data. Only available in SQL Profiler, event replay allows you to
establish a test copy of the database environment from which the data was captured
and repeat the captured events as they occurred originally on the real system. You
can replay them at the same speed as they originally occurred, as fast as possible (to
stress the system), or more likely, one step at a time (to analyze the system after each
event has occurred). Replaying the exact events in a test environment lets you
analyze them without risking detrimental effects on the production system.
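
A minimal Transact-SQL sketch of a server-side trace for the deadlock-monitoring
scenario above follows; the file path C:\Traces\deadlocks is a placeholder, and
events 25 (Lock:Deadlock) and 59 (Lock:Deadlock Chain) are the standard trace
event numbers:

    -- Sketch: create a server-side trace for deadlock events.
    DECLARE @traceid INT;

    -- Option 2 enables file rollover; files are capped at 5 MB each.
    -- SQL Server appends the .trc extension to the path automatically.
    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\deadlocks', 5;

    -- Capture TextData (column 1), SPID (12), and StartTime (14)
    -- for Lock:Deadlock (25) and Lock:Deadlock Chain (59).
    EXEC sp_trace_setevent @traceid, 25, 1, 1;
    EXEC sp_trace_setevent @traceid, 25, 12, 1;
    EXEC sp_trace_setevent @traceid, 25, 14, 1;
    EXEC sp_trace_setevent @traceid, 59, 1, 1;
    EXEC sp_trace_setevent @traceid, 59, 12, 1;
    EXEC sp_trace_setevent @traceid, 59, 14, 1;

    -- Start the trace (status 1 = start, 0 = stop, 2 = close).
    EXEC sp_trace_setstatus @traceid, 1;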
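
Once the trace has been stopped and closed, the saved file can be loaded back for
analysis. A minimal sketch, assuming the trace file created above:

    -- Sketch: load a saved trace file into a rowset for analysis.
    -- fn_trace_gettable reads the .trc file produced by the trace;
    -- DEFAULT reads all rollover files in the set.
    SELECT StartTime, SPID, TextData
    FROM ::fn_trace_gettable(N'C:\Traces\deadlocks.trc', DEFAULT)
    WHERE EventClass IN (25, 59)   -- deadlock events only
    ORDER BY StartTime;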
Self Assessment
State true or false:
1. SQL Profiler allows event data to be saved to a SQL Server table.
2. The event replay option is available in SQL Profiler only.
3. Limiting the event data allows the system to focus on the events pertinent to the monitoring
scenario.
4. Saving captured event data is vital when creating a performance baseline.
5. The events determine which activities are monitored and captured.
11.3 Evaluating Performance
Optimal performance comes from minimal response times and maximum throughput as a
result of efficient network traffic, disk I/O, and CPU time. This goal is achieved by
thoroughly analyzing the application requirements, understanding the logical and physical structure of
the data, and assessing and negotiating tradeoffs between conflicting uses of the database, such
as online transaction processing (OLTP) versus decision support.
Response Time vs. Throughput
Response time is measured as the length of time required for the first row of the result set to be
returned to the user in the form of visual confirmation that a query is being processed.
Throughput is a measure of the total number of queries handled by the server during a given
time.
As the number of users increases, so does the competition for a server’s resources, which in turn
causes response time to increase and overall throughput to decrease.
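
To observe response time for a single query, SET STATISTICS TIME reports the CPU
and elapsed time of each statement. A minimal sketch; the Orders table and its
columns are placeholders:

    -- Sketch: measure the time a query takes on the server.
    SET STATISTICS TIME ON;

    SELECT TOP 100 *
    FROM Orders
    ORDER BY OrderDate DESC;   -- placeholder query

    SET STATISTICS TIME OFF;

The elapsed time reported for a statement approximates its response time; throughput
is observed at the server level, for example by tracking the Batch Requests/sec
counter in System Monitor.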