Benchmark Testing - Is the "Windows Experience Index" in MS Windows 7 a real test? I think not...
Why would you do this? One theory is that you need to have a baseline before you
start changing parameters to make the computer faster. Another idea is to do
what used to be called a "burn in" by exercising all the components over a
period of time; if there is going to be a failure, it will be during this time.
Normally you would use a program that exercises all the components needed
to establish the baseline. Some of the tests take a while to complete,
making the motherboard, memory, processor, and support chips all function
at a higher level than normal computing would induce.
One reason to do benchmark testing is to have a starting point for all further testing, analysis, diagnostics,
and Operating System optimization.
For computers, benchmarks are set for the processor, the memory, the FSB (Front Side Bus), hard drives, video, sound, network, and sometimes a CD/DVD player.
Note: Most CD/DVD/Blu-ray drive benchmarks come from the manufacturer and are set in the drive's 'firmware' (the BIOS of the drive), which cannot be changed easily. Most people therefore take the advertised speeds as the benchmark; very few have actually tested their drives against the manufacturer's specs.
You can do benchmark testing manually, or with a program designed to test single components or the complete computer.
You can do the test from DOS [see DOS isn't dead after all] or in a higher level OS such as Apple OS X, Windows, or Linux.
However, the best benchmark testing will always be done with an Operating System that isn't doing other things while the tests are running, i.e.
DOS: you can't multitask with DOS, only one program can run at a time...
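To make the idea of a manual component benchmark concrete, here is a minimal sketch in Python. The loop count, buffer size, and function names are my own illustrative choices, not part of any standard suite; it simply times a tight arithmetic loop (CPU) and a large in-memory copy (RAM):

```python
# A minimal sketch of a manual component benchmark.
# Iteration counts and buffer sizes are illustrative, not a standard.
import time

def cpu_test(iterations=2_000_000):
    """Time a tight integer-arithmetic loop (a rough CPU exercise)."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

def memory_test(size_mb=64):
    """Time allocating a block of memory and copying it once."""
    start = time.perf_counter()
    block = bytearray(size_mb * 1024 * 1024)
    copy = bytes(block)          # force a full pass through RAM
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"CPU loop:    {cpu_test():.3f} s")
    print(f"Memory copy: {memory_test():.3f} s")
```

The same two numbers, taken on a stock machine, are exactly the kind of baseline figures the rest of this article talks about.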
Once you have your program, or your own in-house programs, you need to set a baseline for the computer(s).
To set a baseline you will need the computer as close to standard as possible: not overclocked, minimum memory, a basic computer before doing any upgrades such as bumping up the processor speed, increasing the memory, or changing the FSB.
Once you have your baseline you can then begin to add or change components and do your speed tweaking.
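One simple way to keep that baseline around for later comparison is to write the results to a file. A sketch, where the file name and result keys are made up for illustration:

```python
# A sketch of recording a baseline run so later runs can be compared
# against it; the file name and result keys are illustrative only.
import json

def save_baseline(results, path="baseline.json"):
    """Write the baseline numbers to disk."""
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

def load_baseline(path="baseline.json"):
    """Read the baseline numbers back."""
    with open(path) as f:
        return json.load(f)

baseline = {"cpu_loop_s": 1.42, "memory_copy_s": 0.31}  # example figures
save_baseline(baseline)
print(load_baseline())
```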
If you look at my optimization articles, you will see they start with
benchmark testing: I use the Task Manager with a fresh installation of the Operating
System I am optimizing.
By having a baseline you can see which changes work and which don't when
trying for the best performance.
For the baseline I would suggest you use DOS, because it is a 16 Bit Operating System that doesn't have other programs, services, or libraries running in the background. The only drawback to DOS is the size of the memory available for the program: DOS can only use 640 K (as in Kilobytes) of conventional memory, with an additional 384 K of 'upper' memory for loading small parts of DOS and a few utilities.
Another advantage to using a DOS program is that the memory available for the
program to run is, by today's standards, minuscule, so the programs are lean.
DOS programs are coded in the oldest programming languages: Machine Language
or Assembly Language. This coding is no frills; most of the program is in one
file, with perhaps a few smaller libraries that are called for special reasons,
such as the processor tests or the hard drive tests. Having these libraries
separated from the main program makes loading the main program faster.
Higher level Operating Systems (32 and 64 bit) limit access to hardware,
routing it through drivers, whereas DOS (16 bit) has
direct access to the hardware.
By using DOS, your baseline testing will be more accurate than, say, under Windows 8,
because DOS is not limited in its access.
Case in point about higher level OSes: some sections or addresses of memory are
locked from access by these Operating Systems, along with some areas of the hard
drive; the same goes for the video device and monitor. With a DOS
program, every component of the computer is accessible for testing.
When you select the device to be tested, the program may load that library and then perform the test.
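That pattern, running only the test that was selected, can be sketched as a simple dispatch table. The device names and placeholder functions below are mine, standing in for the real per-component libraries:

```python
# A sketch of dispatching the selected device to its test routine.
# The placeholder functions stand in for real per-component tests.
def cpu_test():
    return "cpu ok"      # placeholder for the real processor test

def disk_test():
    return "disk ok"     # placeholder for the real hard drive test

TESTS = {"cpu": cpu_test, "disk": disk_test}

def run_test(device):
    # Look up and invoke only the test that was selected.
    return TESTS[device]()

print(run_test("cpu"))
```

A DOS-era program would do the same thing with small overlay files loaded on demand instead of an in-memory table.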
But suppose you want to test your computer from, say, Windows XP, Vista, or Windows 7? You can do that; however, as pointed out above, your baseline has to take into consideration all the other services, programs, and libraries running, even on an OS that has most of the unnecessary fluff turned off.
Some of the better benchmark programs take these necessary services, programs, and libraries into consideration, but that
still is not a true baseline or benchmark test.
Suppose you just bought a new computer with, say, an i5 2.8 GHz processor, 4 GB of memory, a GeForce GTX 560 Ti video card, a 1 TB Seagate hard drive, and a proprietary motherboard.
You could go to each manufacturer's web site and download or read up on the specifications for each component before establishing your baseline. Or, if you bought a benchmark program, it may already have the baseline for each component (a high quality program may go get the baseline specifications for you or have them already in a library).
By establishing the baseline first, you will be able to tell whether the additional memory, tweaking, or overclocking is actually making your computer perform better.
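Checking a new run against the baseline then becomes a simple percentage comparison. A sketch with made-up figures (lower times mean faster):

```python
# Compare a new benchmark run against the saved baseline.
# All figures here are made-up examples; lower times mean faster.
baseline = {"cpu_loop_s": 1.42, "memory_copy_s": 0.31}
current  = {"cpu_loop_s": 1.10, "memory_copy_s": 0.35}

for name, base in baseline.items():
    change = (current[name] - base) / base * 100.0
    verdict = "faster" if change < 0 else "slower"
    print(f"{name}: {change:+.1f}% ({verdict})")
```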
Note: The reason for a baseline is to have a set of specifications you can
compare against when you are tweaking, upgrading, or when you suspect the
performance has dropped off. You do not want to change the "baseline" data or
tweak it to make the testing look better; that defeats the reason for doing the
baseline in the first place.