Sunday, 28 March 2010

How to Export SQL Database from SQL 2000 SP4 to SQL 2008 SP1 x64

Prerequisite:

You require permission on the SQL 2000 server (adding your SQL 2008 service account to the local admins on the SQL 2000 server would normally suffice as, by default, local admins have admin access.)

Everything that follows is done on the SQL 2008 box
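Before starting the wizard, it is worth confirming the SQL 2008 box can actually see the SQL 2000 instance. A quick check from a command prompt (the server and instance names below are placeholders, and this assumes Windows authentication):

sqlcmd -S SQL2000SRV\INSTANCE -E -Q "SELECT @@VERSION"

If that returns the SQL 2000 version string, the permissions and network path are fine.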

1: Start → All Programs → Microsoft SQL Server 2008 → Import and Export Data (64-bit)
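<< If the Start-menu shortcut is missing, the wizard can normally be launched directly; on a default install the 64-bit binary lives at C:\Program Files\Microsoft SQL Server\100\DTS\Binn\DTSWizard.exe >>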

2: Welcome to SQL Server Import and Export Wizard → Next

3: Choose a Data Source →

Data Source → SQL Server Native Client 10.0
Server name → << Name of your remote SQL 2000 server and instance name >>
Database → << Name of the database to be imported >>
→ Next

4: Choose a Destination →

Destination → SQL Server Native Client 10.0
Server name → (local)
Database → << Use existing name or create new name for imported database >>
→ Next


5: Specify Table Copy or Query →

'Copy data from one or more tables or views'
→ Next

6: Select Source Tables and Views →

Select all 'Tables and views'
→ Next


7: Save and Run Package →

Check 'Run immediately'
→ Next

8: Complete the Wizard → Finish

9: Performing Operation →

Wait for it to complete, click 'Close,' and it's done.
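A quick way to sanity check the copy is to compare a row count on both servers from the same command prompt (the database, table, and server names here are placeholders):

sqlcmd -S SQL2000SRV\INSTANCE -E -d MyDatabase -Q "SELECT COUNT(*) FROM dbo.MyTable"
sqlcmd -S "(local)" -E -d MyDatabase -Q "SELECT COUNT(*) FROM dbo.MyTable"

The two counts should match for each table that was copied.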



Author - V. Cosonok

Friday, 26 March 2010

A Brief Guide to Using IOmeter & Sample Results

1) Download Iometer (from www.iometer.org) and run the setup.exe


2) All Programs -> Iometer -> Iometer

3a) Under ‘Topology’ tab, select the local machine and under ‘Disk Targets’ tab select the drive you want to run the IO test on:


<< The red line through the drive just means that Iometer needs to prepare the drive before it runs – no problem >>

3b) Set Maximum Disk Size to 204800 Sectors (one sector is 512 B, so 204800 sectors give 100 MB)
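The arithmetic behind the 100 MB figure, as a quick check:

204800 sectors x 512 bytes per sector = 104,857,600 bytes = 100 MB (using 1 MB = 1,048,576 bytes)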


4) Under ‘Access Specifications’ tab, under Global Access Specifications, select Default, and click Add


<< The Default test is – 67% read, 33% write, 2 KB transfer size, 100% random (0% sequential), Burst Length 1 I/O. This is fine for this brief guide, and the sample results below use it. >>

5a) Under ‘Results Display’ tab, set the Update Frequency to 10 seconds


5b) Under ‘Test Setup’ tab, set Run Time to 1 minute


6) Then click the green flag to start the test, and choose a location to save the results.



Sample Results (over 1 minute)

1) 7.2K SATA Hard Drive (Test run from XP Workstation)


73.49 Total I/Os per Second
0.14 Total MBs per Second
13.60 Average I/O Response Time (ms)
95.39 Maximum I/O Response Time (ms)

2) RAID 5 of 6 x 10k SCSI Ultra 320 disks (Test run from Windows XP Machine on vSphere with Openfiler iSCSI storage)

1727.29 Total I/Os per Second
3.37 Total MBs per Second
0.58 Average I/O Response Time (ms)
128.61 Maximum I/O Response Time (ms)

3) RAID 5 of 6 x 15k SCSI Ultra 320 disks (Test run from Windows XP Machine on vSphere with Openfiler iSCSI storage)

2235.17 Total I/Os per Second
4.37 Total MBs per Second
0.45 Average I/O Response Time (ms)
13.92 Maximum I/O Response Time (ms)

4*) 2-way replicated stripe across 3 LeftHand NSMs each with 12 15K SAS Dual Port hard drives in 2 RAID 5s of 6 disks (Test run from Windows XP Machine on vSphere with LeftHand iSCSI storage)

1514.57 Total I/Os per Second
5.92 Total MBs per Second
0.66 Average I/O Response Time (ms)
459.05 Maximum I/O Response Time (ms)

*1, 2 and 3 used the default test – 67% read, 33% write, 2 KB, 100% random, Burst Length 1 I/O. 4 used 100% read, 0% write, 4 KB, 0% random. I kept getting surprisingly poor results for the LeftHand in comparison to the Openfiler using the default test – this must be put in context though, as the LeftHand has many VMs running on it, uses thin-provisioned LUNs, and 2-way replicated volumes; whereas the Openfiler is relatively under-utilized, has thick-provisioned LUNs, and no storage node redundancy.

 
Author V. Cosonok


Wednesday, 24 March 2010

Openfiler v2.3 Fixes for vSphere

The once-trusty Openfiler storage modules have been playing up since the VMware infrastructure was upgraded from VI3 to vSphere; here are some fixes which seem to have resolved the problems with LUNs locking up under high I/O.

1: Properties of the iSCSI Target (Volumes -> Target Configuration)

Make these changes to the default settings (these are the same values shown in the example at the end of this post):

ImmediateData = yes
MaxRecvDataSegmentLength = 262144
MaxXmitDataSegmentLength = 262144


2: Unmap the LUN and re-map it (Volumes -> LUN Mapping) with R/W Mode = write-back and Transfer Mode = fileio

3: Connect via SSH as root and run this command:
conary updateall

Note: the Openfiler box needs internet access, and if it needs to go through a proxy then edit the file conaryrc in /etc and put this line at the top with the relevant details:

proxy http://user@domain.priv:password@webproxy:port
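For example, with hypothetical placeholder values for the account, password, proxy server, and port, the line might look like this:

proxy http://svcaccount@example.priv:Password1@webproxy.example.priv:8080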


Using Openfiler with Microsoft iSCSI Initiator:
If you are experiencing slow performance from a Windows Server connected to Openfiler using the Microsoft iSCSI initiator, the changes to the default iSCSI target settings outlined above will help. 
For example:
We were using Openfiler as a temporary iSCSI storage device for the staging area of a large Exchange public folder restore. With the default settings, the performance writing to the Openfiler storage was very poor at 200 MB/min. With the modified settings -
ImmediateData = yes
MaxRecvDataSegmentLength = 262144
MaxXmitDataSegmentLength = 262144
R/W Mode = write-back
Transfer Mode = fileio
- we were getting a transfer rate of 2 GB/min across the 1 Gbps network.
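If you want a rough before/after comparison without Exchange in the picture, and robocopy is available on the server, copying a folder of large files to the iSCSI-backed drive and reading the speed figure robocopy reports in its end-of-run summary is a simple sketch (the drive letters and paths below are placeholders):

robocopy C:\StagingData E:\StagingData /E

Running the same copy with the default and then the modified Openfiler settings makes the difference easy to see.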