Frequently Asked Questions
Solaris (page 3)
Q12: Can scripts be shared between Xmfg versions for Windows, Solaris, and
Linux?
A12: Yes, the script engine is virtually identical across all of the
platforms, though minor discrepancies in implementation may exist. One of
the few places where you will have to be careful is absolute path names.
On Windows it would be typical to save a file to C:\temp, for example,
while the same script run on Linux would need to reference it as /tmp.
Relative paths (./subdir/filename.txt) should always work.
Q13: What devices are used in the bus probe?
A13: The Extreme SCSI tools
work with hard drives, tape drives, and CD-ROMs under Solaris. We use the
contents of /dev/rdsk/*t*s2 and /dev/rmt/*n.
All platforms can now also
include "file devices" in the device list by adding a line similar to the
following to the ./config/devices.ini file:
File 00:98:00 /XDEVssol/posix.txt
This maps the binary file specified by the path to device 00:98:00 for
program operation. You will not be able to issue SCSI commands to a file
device, but file devices can be opened for Xperf and for Xmfg File IO
(which uses POSIX file IO).
Q14: We have SCSI devices attached to our host adapter, but they don't show
up when we start the software. Why not?
A14: First off, no devices can be
accessed if the low-level device is currently opened by another program.
For example, CD-ROMs under Solaris are typically under the control of
vold; to see your CD-ROM drive, you need to kill vold.
In addition to being free,
SCSI hard drives also need to be formatted with a disk label under
Solaris. (man format)
Your driver might not be functional, or the special device files
(/dev/rdsk/*t*s2) might not exist for the devices that you're attempting
to connect. You may notice that the bus probe in the shell window shows
the devices as being 'discarded (incorrect SCSI ver)'. This means that an
IDE device (SCSI version 0) is not being added because it can't be
effectively used by the program. IDE devices should use the IDE-specific
version of the software. In the future we plan to add a more thorough IDE
implementation.
If the bus probe shows
simply 'discarded', that means the config/devices.ini file has marked
those drives as not to be operated on. When first run, we attempt to
determine which drives are in use as system devices by examining
/etc/mnttab; if one of the devices is listed there, we discard it
automatically. If you are SURE that you want to see this device and/or
issue SCSI commands to it, open the Device Management window, remove
those Discard entries from the list, and do 'Save and Use'.
Viewing the appropriate Debug.txt file may offer insights into why the
devices are not shown.
Q15: The SCSI CDB issued over the SCSI bus isn't what I specified!
A15: Recent versions of the software use the cdb_opaque method, which
should theoretically give full access to all of the bits of a CDB up to
12 bytes. Solaris (8) doesn't currently appear to support CDBs of 16 or
more bytes via the uscsi low-level interface.
Q16: For Extreme Performance, CPU utilization is nearly 100% and WIO
(Waiting on IO) is 0?
A16: You are probably running Performance in FileIO
mode. When using Raw SCSI (uscsi), the ioctls block until the operation
is complete, and WIO measures the time spent while blocked. (Solaris
doesn't provide a WIO statistic in its version of 'iostat', so it is
left out.)
In FileIO mode we use polled POSIX Asynchronous IO: the read or write
operation is issued, and then we check the status of the IO and reissue
it when it completes. If more than 1 tag is specified, multiple read or
write operations can be in flight at once. Since this is an active
polling loop, and we're interested in achieving the highest performance,
the loop never sleeps; it runs as quickly as possible to minimize the
time between IO completion and reissue for that tag.
Q17: Why are your random performance numbers lower than IOmeter's?
A17: Each tag in Xperf runs in its own thread. The random number
generator, unless told otherwise, inherits its seed from the
launching process. With 4 tags all following the same "random" pattern
on the same disk, we were seeing a scenario where the first tag would
do the actual IO, with head movement, but the following 3 (which were all
asking for the same data) would simply pull it out of the cache. This
resulted in unrealistically high performance numbers. After fixing this
bug by having each tag use its own random seed, performance decreased, as
it should. We suspect that IOmeter has each tag follow the same sequence
and as a result shows bogus performance data that only gets better as
more tags are added, rather than experiencing diminishing returns for
added tags.
Q18: Where is the documentation?
A18: Documentation is available in PDF format
in the install directory. Use Acrobat Reader 4.0 or later to view it.