Checklist for upgrading to a new version of SQL Server

Here is the basic process for a full version upgrade (such as 2000 to 2008) of the SQL Server engine. The steps will be different for RS or AS. I did not include all the details of variations such as replication, mirroring, log shipping, full-text search, or clustering. They are covered in the white paper in the first reference at the bottom.

 

Look for potential problems:

·         Run Upgrade Advisor from your new version of SQL Server and address any issues.

·         Run SQL Server 2008 Upgrade Assistant (I don’t have any details on this – it’s a 3rd party tool).

·         If upgrading to SQL Server 2008: Run the System Configuration Checker

·         Verify you are not using any discontinued features.

·         Read up on the “breaking changes” section of BOL.

·         Test not only the application but also all maintenance activities including disaster recovery.

·         Look for any hints/options or plan guides used in the system and test removing them.

·         On a development or test system – uncover and address issues caused by the upgrade and/or change in hardware

o   Install the new version and load copies of the databases.

o   Look for potential errors, changes in functionality, and performance changes.

o   Make sure you cover typical, peak, and periodic (such as month end) activity. Consider tracing high visibility activity as well, such as the functionality used by an executive or vocal user.

o   The more your test system looks like your production box (hardware, OS, configuration, SQL data) the more accurate your testing will be.

o   Do a profiler/readtrace replay: Capture at least one profiler trace using the replay template on production. Then use the replay functionality of either the RML tools or Profiler itself to replay the same traces on test. Consider that data modifications may fail depending on the state of the backups compared to the state of the database at the time the traces were taken.

o   Have testers hit the test system under load: typical and peak.

o   Put a full application load on the system and test all functionality. Tools like Profiler replay, Visual Studio Database Edition, RML, sample code such as what you might find on codeplex, and 3rd party products can all help with this process.

o   Consider using a tool such as Visual Studio Database Edition to add more test data to the system to simulate future usage.

o   If your IO setup will change and the test system will be similar to production, consider running SQLIO (performance) and/or SQLIOSim with its new parser (correctness).

·         Test the upgrade process itself on a test system.

·         If doing an upgrade in place, make sure the old system is at a minimum supported version level and edition for the upgrade (including the service pack). You cannot upgrade in place from 32-bit to 64-bit. Check that your combination of old and new versions, editions, and platforms is supported.
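
As a quick sanity check before an in-place upgrade, you can confirm the current version, service pack level, and edition of the old instance. A sketch (run on the instance you plan to upgrade):

```sql
-- Verify version, service pack level, and edition before upgrading
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 8.00.2039 for SQL 2000 SP4
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. SP4
       SERVERPROPERTY('Edition')        AS Edition;         -- e.g. Enterprise Edition (64-bit)
```

Compare the results against the supported upgrade paths documented in BOL before you schedule the upgrade.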

 

Planning:

·         Start a document/checklist with all the steps you plan to follow, who is responsible for each, and the expected time for the process.

·         Will you change the OS, hardware, firmware, disk configuration, and SQL Server version all at the same time? If so, you can reduce your overall downtime but you also make it harder to isolate problems that occur after the changes.

·         Decide on a physical implementation vs. a virtual machine. If you will use a virtual machine, make sure it is fully supported by Microsoft and is set up per virtualization best practices.

·         If you are moving to a new box:

o   Follow best practices for configuration – if your memory, number of CPUs, or number of instances on the box/cluster changed, some configuration options will need to be adjusted.

o   Run SQLIO (performance) and SQLIOSim (IO integrity) and verify the system is “fast enough”.

o   Usually you will want the drives laid out the same; if not, plan the steps to change the location: 314546 HOW TO: Move Databases Between Computers That Are Running SQL Server http://support.microsoft.com/?id=314546

·         Do you have special functionality such as replication, mirroring, or clustering? If so, consider whether that changes any of the steps.

·         Will your upgrade be in-place, side-by-side on the same server, or side-by-side on a different server?

·         Will the new instance have exactly the same server name and instance name (e.g. Server1 or VirtualServer1\InstanceA)? If not, how will you accommodate components such as replication and mirroring that assume the server name never changes?

·         Note that all instance-aware components within a given instance are upgraded together. Common components such as SSIS and the client tools are upgraded the first time a newer version is installed on a box.

·         Have a rollback/contingency plan that includes SLAs, how you will know when you must implement the contingency plan, who will make the decisions, etc.

·         What version will you upgrade to? What service pack/CU level will you be at?

·         Will you upgrade/move all user databases at once?

·         Will you move system databases and/or information from the system databases? This could include logins/passwords, linked server definitions, configuration settings, jobs, DTS/SSIS packages, etc. Note that system databases can only be restored to the EXACT same version. User databases can be restored/attached to a newer version but not back down to a lower major version.

·         Will you do a detach/attach or a backup/restore? Attach/detach is the simplest but generally has a longer user downtime. With backup/restore you can do a full (and possibly some incremental transaction log or differential) backup/restore before-hand and only have downtime for the latest transaction log backup/restore time period. This only works if the database is in full or bulk-logged recovery mode (and for bulk-logged you need to be sure you aren’t doing any bulk operations before the downtime).
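
The minimal-downtime backup/restore approach can be sketched as follows (YourDb and the backup share path are placeholders; add WITH MOVE clauses to the restores if file paths differ on the new instance):

```sql
-- Ahead of the downtime window, on the old instance:
BACKUP DATABASE [YourDb] TO DISK = N'\\backupshare\YourDb_full.bak';

-- On the new instance, restore but stay ready to accept more log:
RESTORE DATABASE [YourDb] FROM DISK = N'\\backupshare\YourDb_full.bak'
    WITH NORECOVERY;

-- At downtime, take the final log backup on the old instance:
BACKUP LOG [YourDb] TO DISK = N'\\backupshare\YourDb_tail.trn';

-- Restore it on the new instance and bring the database online:
RESTORE LOG [YourDb] FROM DISK = N'\\backupshare\YourDb_tail.trn'
    WITH RECOVERY;
```

Only the final log backup/restore falls inside the downtime window, which is what keeps the outage short.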

·         How long do you expect the upgrade to take including time for potential troubleshooting, breaks, and interruptions? Add more time to that for the scheduled downtime.

·         Make sure you have enough disk space for the entire upgrade process including extra copies of your backups.

 

Pre-downtime steps

·         If upgrading to SQL Server 2008, consider applying .NET 3.5 SP1 and the latest Windows installer ahead of time as they require a reboot. You may also want to install the SQL Native Client ahead of time.

·         If you have a version of Visual Studio 2008 that is not patched to at least SP1, patch it to at least SP1 prior to installing SQL Server.

·         For SQL Server 2008, if you are not planning to install SP1: run the RTM-based CU4 package first (it patches the setup support files to fix known and resolved setup issues), then run RTM setup, then reapply CU4. Alternatively, slipstream SP1 plus an SP1-based CUx (or whatever the latest SP is at the time) and do a single install that includes RTM, SP1, and CUx. For clusters, do each step across all nodes before going to the next step.

·         Take a baseline of the system and note the circumstances (busy vs. slow day, certain time of day/week/month/year, anything unusual).

·         If moving to a new server, install drivers, supporting applications/utilities, etc. on the new system.

·         Make sure OS level settings such as whether or not “Lock Pages in Memory” is enabled for the SQL Server account are the same on the old and new hardware.

·         If necessary, install the latest backward compatibility files on the SQL Server box and any client boxes (which may include other servers).

·         If moving from DTS to SSIS, consider upgrading the packages ahead of time. If you do this make sure no changes are made to the existing production DTS packages unless you also apply the changes to the converted SSIS packages.

·         Install the new version of the client tools (including service packs), samples, and sample databases on DBA and developer desktops.

·         Become familiar with the new interfaces, functionality, and how your features work in the new version.

·         Understand the upgrade process and how to troubleshoot failures in each step.

·         Test security and connectivity, such as whether linked servers work as expected.

·         Consider pre-growing system databases if doing an in-place upgrade.
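
Pre-growing can be done with ALTER DATABASE ahead of the upgrade so the upgrade scripts do not wait on autogrow. A sketch; the logical file names below are the installation defaults and the sizes are only illustrative:

```sql
-- Pre-grow tempdb and msdb before an in-place upgrade
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 2048MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog,  SIZE = 512MB);
ALTER DATABASE msdb   MODIFY FILE (NAME = MSDBData, SIZE = 512MB);
```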

 

Just prior to the upgrade:

·         Backup everything (system dbs, user dbs, registry, sys db settings, config options, encryption keys, etc) and save off the backups.

·         Run DBCC CHECKDB on all system and user databases – address any errors that are found.
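
One way to sweep every database at once is the undocumented but widely used sp_MSforeachdb procedure (a sketch):

```sql
-- Run a full consistency check on every database; address any errors before upgrading
EXEC sp_MSforeachdb N'DBCC CHECKDB ([?]) WITH NO_INFOMSGS;';
```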

·         If you are scripting out data from the system databases make sure you have everything you need.

·         If you are keeping the same server name on new hardware, make sure the old box is shut down and the new server and SQL Server (sp_dropserver + sp_addserver with the “local” parameter) have been changed to the old server name BEFORE you enable replication or anything else which relies on the server/instance name.
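
The rename itself looks like this sketch (NEWBOX/OLDBOX are placeholders; a restart of the SQL Server service is required before the change takes effect):

```sql
-- Drop the name the new box currently reports, then register the old name as local
EXEC sp_dropserver 'NEWBOX';
EXEC sp_addserver  'OLDBOX', 'local';
-- After restarting the SQL Server service, verify:
SELECT @@SERVERNAME;
```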

·         If doing a backup/restore on a full or bulk-logged recovery mode database, take the full backup and restore it to the new instance. Optionally you may have one or more tran and/or differential backups to restore as well.

 

At Downtime:

·         Document the process, such as actual time to complete each step, any changes you had to make, etc.

·         Make sure all users are out of the system and do whatever is necessary to keep them out until the upgrade is complete.

·         Disable startup processes, SQL Server trace flags, services that access SQL Server, etc.

·         Make sure all replicated data has hardened on all systems and stop replication.

·         Stop any non-essential services and applications, especially any that might have SQL Server related files open. Examples of common problem areas are IIS, backup software, virus/malware scanners, and applications that connect to SQL Server.

·         For an upgrade in place, start the upgrade.

·         For backup/restore, take the final transactional log backup, copy it to the new location (possibly using a SAN clone) and restore it to the new instance.

·         For attach/detach, detach the user databases from the original server, copy them to the new location (or use a SAN clone), and attach them.

·         Take new backups (after upgrade, before any changes) and save them off (where they will not be overwritten until you are ready for them to be).

·         If the system databases were not restored, copy their data and then sync the logins with the users.
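
On SQL Server 2005/2008 the classic way to find and fix orphaned users after moving logins is sp_change_users_login (the database and user names below are placeholders):

```sql
USE [YourDb];
-- List database users whose SIDs no longer match a server login
EXEC sp_change_users_login 'Report';
-- Remap an orphaned user to the login with the same name
EXEC sp_change_users_login 'Auto_Fix', 'AppUser';
```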

·         If the system databases were restored for a pure migration as opposed to an upgrade, tweak configuration options as needed (for example, MAXDOP or the number of TempDB files may change if the number of cores has changed).

·         If you have DTS/SSIS packages or other supporting files stored in the file system, copy them to the new system and disable access to them on the old system.

·         Update statistics (essential for 2000 to 2005/8, nice for 2005 to 2008) on all user databases.
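
A simple way to sweep all databases is sp_updatestats per database; after a 2000 to 2005/2008 upgrade you may prefer a FULLSCAN over the default sampling. A sketch:

```sql
-- Sampled statistics update across all databases
EXEC sp_MSforeachdb N'USE [?]; EXEC sp_updatestats;';
-- Or, within a single critical database, a full scan of every table's statistics:
-- EXEC sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN;';
```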

·         Set the compatibility mode to the current version for each user database (by default an attach or restore leaves it at the level of the source instance).
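
The syntax differs by target version (the database name is a placeholder):

```sql
-- SQL Server 2008: compatibility level 100
ALTER DATABASE [YourDb] SET COMPATIBILITY_LEVEL = 100;
-- SQL Server 2005: compatibility level 90, set via the older procedure
EXEC sp_dbcmptlevel 'YourDb', 90;
```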

·         Take new backups (after changes) and save them off (where they will not be overwritten until you are ready for them to be).

·         If using Full-Text update your catalogs.

·         Perform another DBCC CHECKDB on each database – optional. If possible use the WITH DATA_PURITY option (this option is implied for databases created on 2005 or later, and for any database on which it has previously completed successfully).
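
For example (database name is a placeholder):

```sql
-- Explicitly request column-value (data purity) checks on a database upgraded from 2000
DBCC CHECKDB ([YourDb]) WITH DATA_PURITY, NO_INFOMSGS;
```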

·         Perform DBCC UPDATEUSAGE on each database.

·         Take new backups (after changes) and save them off (where they will not be overwritten until you are ready for them to be).

·         If the instance name changed, point the users/application(s) to the new instance name.

·         Test basic maintenance operations.

·         Test whatever application level items you can do yourself (perhaps with a replay of only SELECT statements) or test with a few trusted users.

·         Let the users back in.

·         Test the application(s) – have all users do at least basic operations.

·         Compare the current performance to your baseline, taking into account the circumstances such as the number of users on the system.

·         Evaluate the criteria to see if you need to apply your rollback/contingency plan.

·         Make sure your new backup strategy is implemented and takes into account the backups you took during the upgrade process.

·         Restart any services, applications, etc. stopped just for the upgrade process.

 

Later:

·         If you enabled autogrow for system databases, consider disabling it.

·         Re-enable startup processes, SQL Server trace flags, services that access SQL Server, etc.

·         Restart any replication.

·         Update your upgrade checklist/documentation based on your most recent experience. Many of the changes may also apply to your disaster recovery docs.

·         Review your documentation, best practices, troubleshooting scripts, profiler templates, etc. for changes that should be made. For example, in 2005 and later the process to rebuild master and the other system databases uses a different utility than 2000 did.

·         Consider turning on new options available in your new version such as database checksum instead of just torn page detection.

·         Take advantage of the more granular security options in the new versions. This includes better job/replication/SSIS security as well as more role options.

·         With 2008 and later set up a Central Management Server so everyone in a group (such as all production DBAs who support one sector of the business) has the same list of SQL Servers available to them in SSMS.

·         Start using new features/functionality in the new version.

·         Keep up to date with service packs and cumulative updates.

·         Rewrite anything using deprecated features. You can use SQL Profiler to capture an event every time a deprecated feature from the SQL engine is used. Starting with SQL Server 2008 there is also a perfmon counter for this.
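
On 2008 you can also query the counter DMV to see which deprecated features have been used since the instance last started. A sketch:

```sql
SELECT instance_name AS deprecated_feature,
       cntr_value    AS times_used
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;
```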

·         Capture a new performance baseline/benchmark on the system and update the baseline periodically.

·         For a side-by-side upgrade, determine when and how you will decommission the old instance.

·         Migrate maintenance plans created in 2000 to the new SSIS format.

 

References:

·         Ultimate guide to upgrading to SQL Server 2008 http://sqlcat.com/msdnmirror/archive/2008/12/07/ultimate-guide-to-upgrading-to-sql-server-2008.aspx

·         The application may have additional steps when you upgrade or move a database. For SharePoint those steps are outlined here: http://mickey4nay.spaces.live.com/blog/cns!8983677073B13695!1484.entry

·         Upgrading a SQL Server 2008 Failover Cluster

 

If you are using replication, Analysis Services (AS), or DTS/SSIS, see these links:

Considerations for Upgrading Replicated Databases

Considerations for Upgrading Analysis Services
Considerations for Upgrading Integration Services
Considerations for Upgrading Data Transformation Services

How People Abuse Their SQL Server Transaction Log – Things NOT to Do

Care and Feeding of the Transaction Log

 

Want to learn more about how the transaction log works? Kalen Delaney, celebrated author of SQL Server 2008 Internals and the Inside SQL Server series as well as a world-class trainer and performance tuner, is giving a pre-conference talk at SQL PASS next week. The pre- and post-conference sessions are full-day sessions you can purchase as an add-on to the regular conference (which runs Tuesday-Thursday). Kalen’s pre-conference session “Care and Feeding of the Transaction Log” is scheduled for Monday, November 2, 2009. She will be raffling off copies of her book and will have copies of her DVD to give away. If you are guilty of any of the transaction log practices below, or have colleagues who are and you need ammunition in your fight for change, Kalen’s session will be a great help to you! Note that she will not necessarily be covering all the items below; they are my own list and not hers. 🙂

How People Abuse Their SQL Server Transaction Log – Things NOT to Do

·         Rely on autogrow – this causes fragmentation, poorly allocated VLFs, a performance hit for duration of grow, etc.

·         Don’t take backups at all or do them on a schedule that doesn’t meet SLAs or doesn’t keep the log “small enough”.

·         Use simple recovery mode in the belief it will improve performance when there are no bulk inserts or something else that actually benefits from minimal logging.

·         Want to “turn off logging”.

·         “Shrink” the log by deleting the LDF file – can result in corruption and an unusable database.

·         Shrink the log file when it will just grow again.

·         Add multiple LDF files due to the mistaken impression that this will improve performance (multiple threads myth or thinking it will split IO over multiple files when logs are mostly sequential). This is particularly a problem with TempDB where we recommend 1/4 to 1 file per core and don’t always make it clear we mean only the data files and not the log file.

·         Only backing up the log files when they reach a certain percentage full to reduce the number of backups occurring at any one time – this can result in not being able to meet your recovery SLAs.

·         Enable instant file initialization and expect it to help with log growth.

·         Create one extremely large log file without regard to # of VLFs created.
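
You can check how many VLFs a log has with DBCC LOGINFO (one row per VLF), and grow the log in moderate chunks rather than one giant allocation. Database and logical file names are placeholders:

```sql
-- One row per VLF; a count in the thousands usually indicates a problem
DBCC LOGINFO ('YourDb');
-- Grow in chunks (e.g. 8 GB at a time) instead of one huge initial allocation
ALTER DATABASE [YourDb] MODIFY FILE (NAME = YourDb_log, SIZE = 8192MB);
```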

·         SharePoint documents have recommended simple recovery mode to “keep the log from filling” but don’t explain the tradeoffs for recoverability, such as losing point-in-time recovery options.

·         Don’t understand the difference between truncating and shrinking the log.

·         Don’t realize that in simple mode you still have to do a full backup first.

·         Don’t realize they need to do a log backup after switching from simple or bulk logged to full.

·         Think DBCC CHECKDB checks for inconsistencies in the log

·         Put data and log on same drive (mix random and mostly sequential), put lots of logs on one drive (defeats purpose of having a log/sequential writes on own IO path), put output of profiler or other activity on same drive as a log

·         Don’t put log on fast enough/properly configured drive (RAID 10, disk partition alignment, separate from file server/other SQL/db data, disk allocation unit = 64k, HBA queue depth 64+, anything that gets writes to 3ms or less, etc.)

·         Ignore corruption messages/event log messages about bad IO

·         Don’t secure the directories where the MDF/LDF and backup files reside

·         Ignore the fact that Tran replication affects log size (can’t truncate until log reader has read data from log)

·         Do many transaction log backups between full backups (with few or no differentials) and don’t consider what happens if one of the files is corrupted/missing or how long it will take to do the restores

·         Don’t test the DR strategy, or at least have it documented and planned/thought out

·         Don’t exclude the MDF/LDF files from virus scanning software

 

Instead of abusing your transaction log, pay attention to Pond’s Twelfth Law: Don’t practice in front of the CIO.  A professional prepares ahead of time. To help you prepare, attend Kalen’s pre-conference session!

Considerations for installing SQL Server on VMWare

You still need to follow the standard SQL Server best practices, security, and preparation when you install on a VMware VM. Below are some additional considerations with regard to VMware.

 

Key points:

·         You must double-check that your combination of VMware, Windows, and SQL Server will be supported.

o   By Microsoft: http://windowsservercatalog.com/svvp.aspx?svvppage=svvpwizard.htm, http://support.microsoft.com/?id=956893

o   By VMware: http://pubs.vmware.com/vsp40/upgrade/wwhelp/wwhimpl/common/html/wwhelp.htm#href=c_database_scenarios.html&single=true

·         Just as with a physical system you must provide a well-configured IO subsystem. Follow standard SQL Server IO best practices. Use separate VMware VMFS partitions on independent disk arrays.

·         Again, just like with a physical machine, you must use hardware with sufficient resources.

·         Do not overcommit resources to the virtual machine (VM). For example, if the physical box has 4 cores and is running 4 VMs, do not assign 2 cores to each VM as this would overcommit the cores. The same applies to memory and disk.

·         VMware recommends using ESX Server for production virtual machines if you want the best performance.

·         “If SQL Server’s ‘lock pages in memory’ parameter has been set, set the VM’s reservations to the amount of memory in the VM. This setting can adversely interfere with ESX Server’s balloon driver; setting reservations will stop the balloon driver from inflating into the VM’s memory space.”

·         Consider configuring “large pages” for some types of workloads.

 

References from VMware:

·         Best Practices for SQL Server http://communities.vmware.com/docs/DOC-8964

·         Performance and Scalability of Microsoft SQL Server on VMware vSphere 4 http://www.vmware.com/files/pdf/perf_vsphere_sql_scalability.pdf

·         Database Scenarios http://pubs.vmware.com/vsp40/upgrade/wwhelp/wwhimpl/common/html/wwhelp.htm#href=c_database_scenarios.html&single=true

·         VirtualCenter Database Performance for Microsoft SQL Server 2005 http://www.vmware.com/resources/techresources/1050

·         SQL Server Workload Consolidation (VMware ESX 3.5) http://www.vmware.com/pdf/SQL_Server_consolidation.pdf

·         SQL Server Performance in a VMware Infrastructure 3 Environment http://www.vmware.com/files/pdf/SQLServerWorkloads.pdf

Checklist for installing SQL Server 2005 as a clustered instance

Checklist for installing SQL Server 2005 as a clustered instance

 

Windows/Hardware

1)      Verify the Windows cluster is set up per basic best practices and that basic failover works.

2)      Verify you have the latest patches, especially security patches, for Windows.

3)      For Windows 2008: Validate the configuration using “Validate a Configuration” in Failover Cluster Management.

4)      For Windows 2008: Make sure your quorum choice is appropriate for the number of nodes and other factors in your environment.

5)      Unless you have verified that your network cards support it, disable TCP Chimney and the other SNP settings.

6)      Verify none of the cluster nodes are domain controllers.

7)      Request a disk subsystem configured per IO and recoverability best practices that meets your minimum performance requirements (often you will require that Avg disk sec/read < 10-20ms and Avg disk sec/write < 3-5ms for a given load on the system).

8)      Have one or more “shared disks” that are not otherwise used by anything else (not even another instance of SQL) available to SQL Server. Create a new group and move the disk(s) from “Available Storage” (new in Windows 2008) to the group you created for this SQL Server instance. The disks should be configured per database best practices. Generally no other resources should be in this group.

 

Prep

9)      Download whatever service packs (SPs) and cumulative updates (CUs) you will be installing. Make sure the files you download are the proper architecture (x86 vs. x64).

10)   Download Visual Studio 2005 SP1.

11)   Find a new, (preferably) static IP address that is not currently used by anything. You will enter this during setup and the setup process will take care of adding it to DNS and other locations as appropriate.

12)   Find or create one or more domain groups that will be used by setup. The best practice is to use three unique groups for each instance of SQL Server, one each for SQL Server, SQL Agent, and Full Text. If the account you are using for setup does not have the permission to add the startup account(s) to the groups, add them manually ahead of time.

13)   Determine your SQL virtual name (cannot be used for a physical or virtual machine anywhere in the domain).

14)   Determine an instance name which will be unique in the cluster. Note that the name of a default instance is implicitly MSSQLSERVER so you can only have one default instance per cluster.

15)   Determine which domain user account(s) you will assign to run SQL Server, SQL Agent, and Full Text. The account(s) you choose will be added to the domain groups by the setup process if your setup account has permissions to do so. As a security best practice each service should have a unique account.

16)   Look through the items in my blog to find possible setup blockers http://blogs.msdn.com/cindygross/archive/2009/06/10/sql-server-2005-clustering-tips-references.aspx.

 

Install

17)   From the node which currently owns the group with the disks to be used by SQL Server, log in with an account that is a local admin on all nodes.

18)   Make sure no one is logged on through terminal services to any of the remote nodes.

19)   Make sure the Remote Registry service, Cryptography services, and Task Scheduler service are started on all nodes.

20)   Verify all available disks in the cluster are online, even those that SQL Server will not use.

21)   Stop non essential services that may slow down file copies (like virus scanners) or try to connect to SQL Server (like IIS or monitoring tools).

22)   Install the RTM version of SQL Server; for SQL Server 2005 you run setup once per instance, and it will install the cluster-aware components on all nodes. This includes SQL Server and Analysis Services if you choose them.

1.       Install the SQL SP.

2.       Install the SQL CU.

3.       Install the VS05 SP1.

23)   On every other node in the cluster, if you want the non-cluster aware components to be available, install those components on each node.

1.       RTM (for example, you may want to install the client tools and SSIS on the other node(s))

2.       SQL SP.

3.       SQL CU.

4.       VS05 SP1.

 

After

24)   Make SQL depend on any drives it will use for data and log files. Those drives must be in the SQL Server group.

25)   If you will be using DTC, you may want to cluster it.

26)   Update MSDTSSrvr.ini.xml on each node to point to the SQL Server virtual name\instance. If you have multiple instances of SQL Server that will store SSIS packages you can add multiple instance names to the file.

27)   Consider setting “Max server memory” for each instance of SQL Server.

28)   Make sure the SQL Server service is set to “affect the group”.

29)   Follow your normal SQL Server best practices and standard configuration, such as removing BUILTIN\Administrators and configuring maintenance operations.

 

References

·         The 3 Things you Need to Know to Install SQL 2005 on Windows 2008 Cluster http://blogs.msdn.com/psssql/archive/2009/04/08/the-3-things-you-need-to-know-to-install-sql-2005-on-windows-2008-cluster.aspx

·         List of known issues when you install SQL Server 2005 on Windows Server 2008 http://support.microsoft.com/default.aspx?scid=kb;EN-US;936302

·         SQL Server 2005 Failover Clustering White Paper http://www.microsoft.com/downloads/details.aspx?familyid=818234dc-a17b-4f09-b282-c6830fead499&displaylang=en

·         System Configuration Check (SCC) http://msdn.microsoft.com/en-us/library/ms143185(SQL.90).aspx

·         Hardware and Software Requirements for Installing SQL Server 2005  http://msdn.microsoft.com/en-us/library/ms143506(SQL.90).aspx

·         How to: Create a New SQL Server 2005 Failover Cluster (Setup) http://msdn.microsoft.com/en-us/library/ms179530(SQL.90).aspx

·         SQL Server 2005 Readme http://download.microsoft.com/download/5/0/e/50ec0a69-d69e-4962-b2c9-80bbad125641/ReadmeSQL2005.htm

·         Changes to the readme file for SQL Server 2005 http://support.microsoft.com/default.aspx?scid=kb;EN-US;907284

·         (my blog) SQL Server 2005 Clustering Tips/References http://blogs.msdn.com/cindygross/archive/2009/06/10/sql-server-2005-clustering-tips-references.aspx

·         (my blog) How to configure DTC for SQL Server in a Windows 2008 cluster http://blogs.msdn.com/cindygross/archive/2009/02/22/how-to-configure-dtc-for-sql-server-in-a-windows-2008-cluster.aspx

SQL Server and TCP Chimney

If you are using SQL Server or Analysis Services: I suggest you double-check that your SNP settings, especially TCP Chimney Offload, are all OFF unless your NIC vendor has verified they support it and you have installed their version of drivers that support it. Windows 2003 SP2 turned it on by default; you can disable it with a hotfix (which updates three registry key values) or set the registry key values manually yourself. If the NIC vendor does support the settings they can improve your network performance, but when they don’t support it you can see odd connectivity problems.

 

My suggestion for a standard:

 

SNP/TCP Chimney settings will be disabled to avoid known problems with SQL Server and other products.
REASON: Performance and usability. When TCP Chimney is enabled it will often result in failed connectivity to SQL Server and/or dropped packets and connections that affect SQL server. See  948496 An update to turn off default SNP features is available for Windows Server 2003-based and Small Business Server 2003-based computers
http://support.microsoft.com/default.aspx?scid=kb;EN-US;948496 and 942861 Error message when an application connects to SQL Server on a server that is running Windows Server 2003: “General Network error,” “Communication link failure,” or “A transport-level error” http://support.microsoft.com/default.aspx?scid=kb;EN-US;942861

 


 

Some of the known SNP/Chimney issues:

·         951037  Information about the TCP Chimney Offload, Receive Side Scaling, and Network Direct Memory Access features in Windows Server 2008 http://support.microsoft.com/default.aspx?scid=kb;EN-US;951037

·         942861  Error message when an application connects to SQL Server on a server that is running Windows Server 2003: “General Network error,” “Communication link failure,” or “A transport-level error” http://support.microsoft.com/default.aspx?scid=kb;EN-US;942861

·         945977  Some problems occur after installing Windows Server 2003 SP2 http://support.microsoft.com/default.aspx?scid=kb;EN-US;945977

·         947775 On a Windows Server 2003 computer that has a TCP Chimney Offload network adapter, TCP data stream may be corrupted when the network adapter indicates a MDL chain whose starting MDL has a nonzero offset http://support.microsoft.com/default.aspx?scid=kb;EN-US;947775

·         936594 You may experience network-related problems after you install Windows Server 2003 SP2 or the Scalable Networking Pack on a Windows Server 2003-based computer http://support.microsoft.com/default.aspx?scid=kb;EN-US;936594

·         947773 A Windows Server 2003-based computer responds slowly to RDP connections or to SMB connections that are made from a Windows Vista-based computer http://support.microsoft.com/default.aspx?scid=kb;EN-US;947773

·         946056 A Windows Server 2003-based computer responds slowly to RDP connections or to SMB connections that are made from a Windows Vista-based computer http://support.microsoft.com/default.aspx?scid=kb;EN-US;946056

·         940202 A Windows Server 2003-based computer may stop responding during shutdown after you install the Scalable Networking Pack http://support.microsoft.com/default.aspx?scid=kb;EN-US;940202

·         924325 Network applications that use the NetBT keep-alive transmissions may not work correctly after you install the Windows Server 2003 Scalable Networking Pack on a Windows Server 2003-based computer http://support.microsoft.com/default.aspx?scid=kb;EN-US;924325

·         945466 Stop error occurs when a computer that is TCP Offload Engine (TOE) enabled is running under stress with a “low resource simulation” mode http://support.microsoft.com/default.aspx?scid=kb;EN-US;945466

·         927168 TCP traffic stops after you enable both receive-side scaling and Internet Connection Sharing in Windows Vista or in Windows Server 2003 with Service Pack 1 or Service Pack 2 http://support.microsoft.com/default.aspx?scid=kb;EN-US;927168

How to automate Update Statistics

For SQL Server 2005, here are some options for updating statistics with the default settings, which sample the data instead of reading every row:

 

1) If you are also defragmenting your database with REBUILD and/or REORGANIZE, you will want to integrate your UPDATE STATISTICS runs into that schedule. When you do a REBUILD you are essentially getting the equivalent of an UPDATE STATISTICS WITH FULLSCAN, so you do not want to turn around and do a sampled statistics update or another FULLSCAN too soon after the REBUILD.

 

Paul Randal talks about the interaction of the two: http://www.sqlskills.com/BLOGS/PAUL/post/Search-Engine-QA-10-Rebuilding-Indexes-and-Updating-Statistics.aspx  

 

A great script that lets you schedule your defragmentation and statistics maintenance all at once:  SQL Server 2005 and 2008 – Backup, Integrity Check and Index Optimization

http://ola.hallengren.com/

If you choose to run this script you will need three components created in your master database: CommandExecute, DatabaseSelect, and IndexOptimize.

 

2) Here is how to loop through all databases and do an UPDATE STATISTICS on all objects (within the restrictions of the sp_updatestats procedure). It uses the undocumented and unsupported (but widely used) sys.sp_MSforeachdb. The query also captures how long the update takes per database:

 

 

exec master.sys.sp_MSforeachdb 'USE [?];
DECLARE @starttime datetime, @endtime datetime
SELECT @starttime = GETDATE()
SELECT db_name() as CurrentDB, @starttime as DBStartTime
EXEC sp_updatestats
SELECT @endtime = GETDATE()
SELECT @starttime as StartTime, @endtime as EndTime, DATEDIFF(MINUTE, @starttime, @endtime) as TotalMinutes'

 

3) If you want more control over which statistics are updated, you can specify an individual object or index:

UPDATE STATISTICS [table_name]
or

UPDATE STATISTICS [table_name] [index_name/statistics_name]
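As a sketch of these variations (the table and index names here are hypothetical placeholders), the common forms look like this:

```sql
-- Update all statistics on one table using the default sampling rate
UPDATE STATISTICS dbo.Orders;

-- Update a single statistic (here a hypothetical index IX_Orders_OrderDate),
-- reading every row rather than a sample
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;

-- Or specify an explicit sampling percentage instead
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH SAMPLE 25 PERCENT;
```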

 

4) Create an SSIS package that uses the “UPDATE STATISTICS Task” as part of a maintenance plan. You can create it manually or with the maintenance plan wizard. Update Statistics Task (Maintenance Plan) http://msdn.microsoft.com/en-us/library/ms178678.aspx

 

Considerations:

·         In most cases you will want auto update and auto create statistics ON, and you will NOT want to use the NORECOMPUTE option, as that turns off the automatic updating. Having auto stats on works best when you also have a regularly scheduled UPDATE STATISTICS (on a schedule and/or after large data modifications/batches) to reduce the chance that the auto update has to kick in during a busy time.

·         In many cases you will also want AUTO_UPDATE_STATISTICS_ASYNC on so that the query that pushed the stats over the edge does not have to wait for updated statistics. The downside is that the query that runs right after the statistics reach the “stale” threshold will use the old statistics. Whether it is worth waiting for any given query depends on how long it takes to update the statistics, whether the query plan changes afterwards, and whether a different query plan causes a significant difference in total response time.

·         Note from BOL: For databases with a compatibility level below 90, executing sp_updatestats resets the automatic UPDATE STATISTICS setting for all indexes and statistics on every table in the current database. For more information, see sp_autostats (Transact-SQL). For databases with a compatibility level of 90 or higher, sp_updatestats preserves the automatic UPDATE STATISTICS setting for any particular index or statistics.
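The settings discussed above can be checked and changed with T-SQL; this is a minimal sketch, and the database name AdventureWorks is just a placeholder:

```sql
-- Check the current automatic statistics settings for every database
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases;

-- Turn the commonly recommended settings on for one database
ALTER DATABASE AdventureWorks SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```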

 

References:

·         Statistics Used by the Query Optimizer in Microsoft SQL Server 2005 http://technet.microsoft.com/en-us/library/cc966419.aspx

·         Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 http://msdn.microsoft.com/en-us/library/dd535534.aspx

SQL Server with NetApp SAN

If you are planning to use NetApp as the SAN for your SQL Server instance(s), take a look at these documents in addition to the normal SQL Server IO planning best practices documents.

TR-3779 Sizing best practice guide.
http://media.netapp.com/documents/tr-3779.pdf

TR-3696 This is for the storage layout best practices.
http://www.netapp.com/us/library/technical-reports/tr-3696.html

White Paper on 1 TB DSS systems
http://www.netapp.com/us/library/technical-reports/tr-3650.html

SMSQL 5.0 Best Practice Guide
http://media.netapp.com/documents/tr-3431.pdf

Microsoft® SQL Server 2005 Performance and Scalability Testing Using NetApp FAS920 Storage Systems
http://media.netapp.com/documents/tr-3402.pdf

SQL Server Security Granularity

I have had some questions recently about how to grant developers certain permissions without giving them sysadmin rights. Hopefully this summary will help you determine how to grant the least possible privileges. The summary is based on SQL Server 2005 but will also apply to SQL Server 2008.

·         I would hesitate to grant developers any more permissions in development than they get in production. This means avoiding not only sysadmin but also db_owner where possible.

o   This avoids problems where they spend a long time developing something only to find out at the last minute that it won’t work with production permissions.

o   If there is any production data on the development system it may be more vulnerable to attack when more people have elevated permissions.

o   As an alternative you may want to create an application that lets them submit requests to do things that require elevated permissions. It can log on to SQL Server with the appropriate permissions and perform whatever action they need. It can optionally log this activity, create a change request ticket, email the DBAs, or whatever you like. This should reduce the chance that some elevated permission need makes it into the application because it is much more obvious when they are performing an activity that they or the application will not be able to do in production.

·         Generally you will not want to grant CREATE DATABASE permissions to non-DBAs. Creating databases involves OS level permissions and space management, performance considerations, best practice implementations, backups, maintenance, etc. Also, the creator of a database can make themselves a db_owner which is usually more than a developer needs. If you do decide to grant permissions to create databases, the permission is GRANT CREATE DATABASE TO … and/or GRANT ALTER ANY DATABASE TO ….

·         The KILL command to kill an existing SPID requires either PROCESSADMIN or SYSADMIN role membership. The PROCESSADMIN role includes both ALTER ANY CONNECTION and ALTER SERVER STATE, and the combination of the two is required to use the KILL command.

·         To run SHOWPLAN or use the GUI actual/estimated execution plans to see execution plans, you can GRANT SHOWPLAN TO… in the database(s) that contain the objects referenced in the queries. They also need permission to execute the query itself. There is no need to grant anything more than the ability to execute the query if you just want to SET STATISTICS TIME or SET STATISTICS IO. The danger in granting this permission is that the plan could theoretically contain information about data or the schema that would help a hacker.

·         To run SQL Profiler you can GRANT ALTER TRACE TO…. The danger is that the user can see information about the schema and sometimes the data that could be used to hack into the system.

·         To use Job Activity Monitor, first add the login or group as a user in the msdb database. Then add them to one of the SQL Server Agent roles, for example the reader role:

sp_addrolemember 'SQLAgentReaderRole', 'test1'

·         Using the Activity Monitor requires VIEW SERVER STATE and SELECT on sysprocesses and syslocks. The SELECT on sysprocesses and syslocks is granted by default to PUBLIC and therefore everyone, but VIEW SERVER STATE has to be explicitly granted.

·         To run the Database Engine Tuning Advisor (DTA) you need SHOWPLAN permissions and the ability to execute the queries in all the databases in the workload. However, if the trace file used as input includes the LoginName data column, DTA will try to impersonate those users, so permission needs to be granted to each user, or you can avoid collecting the LoginName data column. Right after a new instance of SQL Server is installed, a sysadmin must run DTA once, to initialize some settings, before anyone else can use it.

·         To create objects, you have a couple of choices. You can GRANT CREATE TABLE, GRANT CREATE PROCEDURE, etc. in each database. Alternatively you can add them to the db_ddladmin role in the appropriate databases. This will grant them VIEW ANY DATABASE and the database-level permissions ALTER ANY ASSEMBLY, ALTER ANY ASYMMETRIC KEY, ALTER ANY CERTIFICATE, ALTER ANY CONTRACT, ALTER ANY DATABASE DDL TRIGGER, ALTER ANY DATABASE EVENT NOTIFICATION, ALTER ANY DATASPACE, ALTER ANY FULLTEXT CATALOG, ALTER ANY MESSAGE TYPE, ALTER ANY REMOTE SERVICE BINDING, ALTER ANY ROUTE, ALTER ANY SCHEMA, ALTER ANY SERVICE, ALTER ANY SYMMETRIC KEY, CHECKPOINT, CREATE AGGREGATE, CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE QUEUE, CREATE RULE, CREATE SYNONYM, CREATE TABLE, CREATE TYPE, CREATE VIEW, CREATE XML SCHEMA COLLECTION, and REFERENCES.

 

This query will show you the list of available privileges:
select * from sys.fn_builtin_permissions (DEFAULT)
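Putting several of the grants above together, a least-privilege setup might look like this sketch. The login name DevLogin is a placeholder, and the sketch assumes the login already exists and has been added as a user in the relevant databases:

```sql
-- Server-scoped permissions (run in master)
GRANT ALTER TRACE TO DevLogin;        -- run SQL Profiler traces
GRANT VIEW SERVER STATE TO DevLogin;  -- required for Activity Monitor

-- Database-scoped permission: view estimated/actual execution plans
-- (run in each database referenced by the queries; DB name is a placeholder)
USE SomeAppDB;
GRANT SHOWPLAN TO DevLogin;

-- Let them see jobs in Job Activity Monitor
-- (run in msdb after adding the login as a user there)
USE msdb;
EXEC sp_addrolemember 'SQLAgentReaderRole', 'DevLogin';
```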

x64 Windows – Upgrade from 32bit SQL Server to 64bit SQL Server

Many people are now upgrading from 32bit to 64bit SQL Servers. Most of you have a match between your operating system and your SQL Server platform. For example, most of you install a 32bit SQL Server on 32bit Windows, and if you have the x64 platform of Windows, you usually install the x64 SQL Server. But what happens when you have a 32bit SQL Server on an x64 system and you want to change it to be all x64? Note that you cannot install 32bit SQL Server on IA64, so this scenario does not apply to Itanium systems. In the example below both the platform and the version of SQL Server are changing.

You have an instance of SQL Server 2000 32bit installed on Windows 2003 SP2 x64. This means SQL Server is “running in the WOW”. WOW stands for Windows on Windows and means you have a 32bit application running inside a 64bit OS. This gives SQL Server a full 4GB of user-addressable virtual memory space, which is more than any 32bit application can get on a 32bit OS without memory mapping (in SQL Server we do memory mapping of the buffer pool through “AWE”). However, running in the WOW doesn’t give you the full memory advantages you would get from running a true x64 application on an x64 OS. SQL Server 2000 was not released in an x64 “flavor”, but once you upgrade to SQL Server 2000 SP4 Microsoft will support running it in the WOW. SP4 was required for this particular configuration even before support for SP3 was discontinued. See 898042 Changes to SQL Server 2000 Service Pack 4 operating system support http://support.microsoft.com/default.aspx?scid=kb;EN-US;898042. Generally you should avoid installing 32bit applications on x64 systems whenever possible. Any recently purchased hardware will be x64, and putting a 32bit OS on it will throttle back its memory capabilities, so your best bet is going to be an x64 version of SQL Server on x64 Windows.

 

You want to upgrade this instance from SQL Server 2000 32bit to SQL Server 2005 x64 on the same box. You would like to keep the same instance name. However, we do not support an in-place upgrade from any 32bit SQL Server to any 64bit SQL Server. Additionally, you cannot restore system databases (master, model, tempdb, msdb) to a different version, not even a different service pack or hotfix level.

·         Version and Edition Upgrades “Upgrading a 32-bit instance of SQL Server 2000 from the 32-bit subsystem (WOW64) of a 64-bit server to SQL Server 2005 (64-bit) on the X64 platform is not supported. However, you can upgrade a 32-bit instance of SQL Server to SQL Server 2005 on the WOW64 of a 64-bit server as noted in the table above. You can also backup or detach databases from a 32-bit instance of SQL Server 2000, and restore or attach them to an instance of SQL Server 2005 (64-bit) if the databases are not published in replication. In this case, you must also recreate any logins and other user objects in master, msdb, and model system databases.”

·         You cannot restore system database backups to a different build of SQL Server “You cannot restore a backup of a system database (master, model, or msdb) on a server build that is different from the build on which the backup was originally performed.”

·         If the SQL Server versions are the same, even system databases can be restored between different platforms (x86/x64). However, you do sometimes have to make one update to the msdb database when you do this (because the SQL Server install path has often changed, such as using “program files (x86)” on an x64 system). For non-system databases the version you restore to doesn’t have to be identical; generally you can restore a user database to a higher version, and the platform (x86/x64) is irrelevant. See: Error message when you restore or attach an msdb database or when you change the syssubsystems table in SQL Server 2005: “Subsystem % could not be loaded”

 

So in this case you have two basic options if you must keep the same server and instance name:

1.       Upgrade, reinstall, attach

a.       Make sure all users, applications, and services are totally off the system for the entire duration of the downtime

b.      Upgrade SQL 2000 SP4 32bit to SQL 2005 (or 2008) 32bit (NOT x64! – that is not a viable upgrade path)

c.       Backup all databases

d.      Detach the user databases (the detach does a checkpoint to ensure consistency)

e.      Make copies of the mdf/ldf files for user and system dbs

f.        Uninstall SQL Server 2005 32bit (to make the instance name available)

g.       Install SQL Server 2005 x64 to the same instance name and at the EXACT same version as what was just uninstalled

h.      Restore master, model, msdb

i.         Attach the user databases

j.        If needed, run the update from Error message when you restore or attach an msdb database or when you change the syssubsystems table in SQL Server 2005: “Subsystem % could not be loaded”

k.       Apply the appropriate Service Pack and/or Cumulative Update

l.         Take full backups

m.    Allow users back in the system
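Steps c, d, and i above can be sketched in T-SQL. This is a hedged illustration only: the database name, file paths, and backup location are placeholders to adapt to your system:

```sql
-- (c) Back up every database before touching anything
BACKUP DATABASE AppDB
    TO DISK = N'D:\Backups\AppDB_preupgrade.bak' WITH INIT;

-- (d) Detach the user database (the detach issues a checkpoint first)
EXEC sp_detach_db @dbname = N'AppDB';

-- (i) After the x64 instance is installed, attach the user database
CREATE DATABASE AppDB
    ON (FILENAME = N'D:\Data\AppDB.mdf'),
       (FILENAME = N'D:\Logs\AppDB_log.ldf')
    FOR ATTACH;
```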

2.       Reinstall, attach, copy system db info

a.       Make sure all users, applications, and services are totally off the system for the entire duration of the downtime

b.      Backup all databases

c.       Extract all relevant information to allow re-creation of system database information. This includes logins/passwords, configuration settings, replication settings, linked servers (including login mappings), custom error messages, extended stored procedures, MSDB jobs, DTS/SSIS packages stored in MSDB, proxies, any objects manually created in any system database. If you go this route let me know and I’ll double check that this list is complete.

d.      Detach the user databases (the detach does a checkpoint to ensure consistency)

e.      Make copies of the mdf/ldf files for user and system dbs

f.        Uninstall SQL Server 2000 32bit (to make the instance name available)

g.       Install SQL Server 2005 x64 to the same instance name.

h.      Attach the user databases

i.         Apply all the system information you extracted above including sync’ing users to the new logins.

j.        If needed, run the update from Error message when you restore or attach an msdb database or when you change the syssubsystems table in SQL Server 2005: “Subsystem % could not be loaded”

k.       Apply the appropriate Service Pack and/or Cumulative Update

l.         Take full backups

m.    Allow users back in the system

SQL Server Consolidation

Many companies are now looking to consolidate their SQL Server instances. The old strategy of one instance per server can be wasteful of resources. Often large chunks of physical resources sit idle and a lot of money is spent on electricity to power all those machines. If you can consolidate even some of your instances onto fewer machines, you can save both physical and human resources. Note that any one of the below strategies will work on either a standalone or clustered system.

 

Some of the pros/cons of each common type of consolidation strategy:

 

Hyper-V virtualization

·         Great when you have to use a version of the OS, drivers, or applications that aren’t used anywhere else or that don’t ‘play well with others’.

·         Can be more flexible and easier to move to another host machine than a physical instance.

·         Has great performance for SQL Server if configured per best practices; you can get the same throughput at a slight cost in increased CPU usage.

·         Network intensive applications may have a higher network and CPU cost on a VM.

·         For now, any Hyper-V virtual machine (VM) is limited to only 4 CPUs assigned per VM (for Windows 2008, 2 CPUs for Windows 2003 guest OS).

·         For now, any Hyper-V virtual machine is limited to 64GB of RAM per VM.

·         Requires x64 processors with hardware-assisted virtualization (Intel VT or AMD-V) and DEP enabled.

·         Allows total isolation of the entire environment.

 

Multiple instances

·         Very good for isolating security (assuming each SQL Server service starts with a different account).

·         Allows each instance to be managed and configured to meet the needs of a single group of users. This often makes downtime for SQL Server patches easier to arrange.

·         There is some additional overhead, mainly memory, required compared to a single instance because each instance has some allocations that occur at startup regardless of actual usage. However, this is usually low, especially on today’s high RAM systems.

·         Allows multiple versions of SQL Server to be installed at once with each application on its preferred version. The different version could mean 2000 vs. 2005 vs. 2008, or it could mean different service pack and hotfix levels.

 

Single instance with multiple databases/applications

·         This method can be very cost effective and is often easy to manage as long as the applications/databases don’t have performance problems or cause conflicts with one another.

·         If a SQL Server patch has to be applied for one database, all the databases share the downtime. The administration is easier because the patch only has to be applied once, but it can be more difficult to arrange downtime that is acceptable to all users.

·         This method can cause security problems. If any application has permissions, such as sysadmin role membership, that extend outside of its own database, it can affect other databases/applications on the server.

o   One example of a problem is an application with sysadmin permissions that changes configuration settings, TempDB settings, or other things that affect other databases/applications either directly or indirectly.

o   Another example is that if an application doesn’t protect against SQL injection, it can allow a hacker into the database. If that database is on an instance with other databases and the id used by the hacked application has elevated permissions such as sysadmin then the hacker now has access to all the other databases.

o   This could include accidental or intentional changes by internal employees or contractors, so the danger is not limited to databases accessible through the internet.

·         Applications share TempDB which can sometimes be a bottleneck depending on the way the applications access the databases. Often this can be managed with proper TempDB sizing and number of data files.

·         Depending on how you license your software, this could possibly save you some money.
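The TempDB sizing point above is usually handled by pre-sizing the files and adding data files. This is only a sketch; the sizes, path, and file count are placeholders to adjust for your hardware and workload:

```sql
-- Pre-size the existing tempdb data file so it does not autogrow under load
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

-- Add an additional, equally sized data file (a common starting point is
-- one data file per CPU core, up to a reasonable cap)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```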

 

I hope you find this information helpful, and I have included some references below with more details on your options.

Green IT in Practice: SQL Server Consolidation in Microsoft IT

http://msdn.microsoft.com/en-us/architecture/dd393309.aspx

Running SQL Server 2008 in a Hyper-V Environment – Best Practices and Performance Recommendations

http://sqlcat.com/whitepapers/archive/2008/10/03/running-sql-server-2008-in-a-hyper-v-environment-best-practices-and-performance-recommendations.aspx

Planning for Consolidation with Microsoft SQL Server 2000

http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/SQL2KCon.mspx

SQL Server Consolidation on the 64-Bit Platform

http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/64bitconsolidation.mspx

 
