Bienvenido! - Willkommen! - Welcome!

Technical blog of Tux&Cía., Santa Cruz de la Sierra, BO
Main blog: Tux&Cía.
Advanced information blog: Tux&Cía.-Información
May the source be with you!

Tuesday, September 1, 2015

There is always a price for it!

"Software is like sex: it's better when it's free!"
Subtitled version:

Friday, July 24, 2015

Front Side Bus & PCI Express guide
There are three main buses in most computers:

1) PCI Bus - The PCI bus connects your expansion cards and drives to your processor and other subsystems. On most systems the PCI bus runs at 33 MHz; if you go higher than that, cards, drives, and other devices can have problems. The exception is found in servers, where special 64-bit (extra-wide) 66 MHz PCI slots accept special high-speed cards. Think of this as a double-width passing lane on a major road that lets more cars through. For information about PCI Express please see the PCI Express Guide.

2) AGP Bus - The AGP bus connects your video card directly to your memory and processor. It is very high speed compared to standard PCI and runs at a standard 66 MHz. Only one device can be attached to the AGP bus (it supports a single video card), so its speed is better than the PCI bus, which has many devices on it at once.

3) Front Side Bus (FSB) - The Front Side Bus is the most important bus to consider when you are talking about the performance of a computer. The FSB connects the processor (CPU) in your computer to the system memory. The faster the FSB is, the faster you can get data to your processor. The faster you get data to the processor, the faster your processor can do work on it. The speed of the front side bus depends on the processor and motherboard chipset you are using as well as the system clock. Read on for more information about the Front Side Bus later in this article.
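A rough sanity check of these figures, assuming peak bandwidth is simply clock rate times bus width times transfers per clock (the function name and example values are illustrative):

```python
# Peak bandwidth of a parallel bus, in MB/s:
# clock (MHz) x width (bytes) x transfers per clock cycle.
def peak_bandwidth(clock_mhz, width_bytes, transfers_per_clock=1):
    return clock_mhz * width_bytes * transfers_per_clock

print(peak_bandwidth(33, 4))      # 32-bit PCI at 33 MHz: 132 MB/s
print(peak_bandwidth(66, 4, 8))   # AGP 8x (66 MHz, 8 transfers/clock): 2112 MB/s, ~2,100
print(peak_bandwidth(200, 8, 4))  # 64-bit, quad-pumped 200 MHz FSB: 6400 MB/s
```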
PCI Express bandwidth: 1x, 2x, 4x, 8x, 16x and 32x links all have much greater bandwidth than basic PCI.

Common Buses and their Max Bandwidth

PCI: 132 MB/s
AGP 8X: 2,100 MB/s
PCI Express 1x: 250 [500]* MB/s
PCI Express 2x: 500 [1000]* MB/s
PCI Express 4x: 1000 [2000]* MB/s
PCI Express 8x: 2000 [4000]* MB/s
PCI Express 16x: 4000 [8000]* MB/s
PCI Express 32x: 8000 [16000]* MB/s
USB 2.0 (max possible): 60 MB/s
IDE (ATA100): 100 MB/s
IDE (ATA133): 133 MB/s
SATA: 150 MB/s
Gigabit Ethernet: 125 MB/s
IEEE 1394b [FireWire 800]: ~100 MB/s*

* Note 1 - Since PCI Express is a serial, point-to-point technology, data can be sent over the link in two directions at once. Normal PCI is a shared parallel bus, so all data travels in one direction at a time. Each 1x lane in PCI Express can transmit in both directions simultaneously. In the table the first number is the bandwidth in one direction and the second number is the combined bandwidth in both directions. Also note that PCI Express bandwidth is not shared the way PCI bandwidth is, so there is less congestion on the bus.

Update: The table above contains speeds for the PCI Express 1.0 bus.

For PCIe version 2.0, multiply all bandwidths by 2. 
For example a PCI Express 2.0 16x slot has a max bandwidth of 8000 MB/s one way or 16000 MB/s both ways.
* Note 2 - FireWire 800 has a bandwidth of 786.432 Mbit/s, which converts to between 98 and 99 MB/s.
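The numbers in the table and in Note 2 can be reproduced with a little arithmetic. This sketch assumes PCIe 1.0's 250 MB/s per lane per direction and simple doubling for 2.0 (the function name is mine):

```python
# PCIe 1.0 runs at 2.5 GT/s per lane; 8b/10b encoding leaves
# 2.0 Gbit/s of payload, i.e. 250 MB/s per lane, per direction.
def pcie_mb_per_s(lanes, version=1):
    per_lane = 250 * (2 if version == 2 else 1)  # one direction only
    return lanes * per_lane

print(pcie_mb_per_s(16))     # PCIe 1.0 x16: 4000 MB/s one way
print(pcie_mb_per_s(16, 2))  # PCIe 2.0 x16: 8000 MB/s one way
print(786.432 / 8)           # FireWire 800: 98.304 MB/s
```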
PCI Express 2.0 runs twice as fast as PCI Express 1.0 or 1.1. However, there is currently no advantage to using a motherboard with PCI-E 2.0 slots, as no card out yet can take full advantage of even a PCI-E 1.1 or 1.0 slot. (To my knowledge, there isn't any major difference between PCI-E 1.0 and 1.1. The PCI-E slots on most motherboards, unless they're on new chipsets like the X38/X48/780, are 1.1.) And no, PCI-E 2.0 slots are not called PCI Express X32. They're still PCI Express X16 slots.
Increased bandwidth translates into increased system performance. We've long known that to get the most out of your processor you need to get as much information into it as possible, as quickly as possible. Chipset designers have consistently addressed this by increasing Front Side Bus speeds. The problem is that a faster front side bus speeds up transfers between the memory and CPU, but you often have data coming from other sources that needs to reach the memory or CPU: drives, network traffic, video, etc. PCI Express addresses this problem head on by making it much faster and easier for data to get around the system.

Q-A Common Questions about PCI Express

Q:Is PCI Express Faster Than PCI?
A: PCI Express is much faster than PCI. Even a 1x link (250 MB/s one way) offers nearly twice the bandwidth of the entire shared PCI bus (132 MB/s). When you compare PCI Express video to PCI video the difference is enormous: a PCI Express 16x slot has over 29x the bandwidth of PCI.
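These ratios follow directly from the one-direction figures in the bandwidth table above:

```python
# One-direction bandwidths from the table, in MB/s.
pci, pcie_x1, pcie_x16 = 132, 250, 4000

print(round(pcie_x1 / pci, 2))   # 1.89: a single 1x link vs the whole shared PCI bus
print(round(pcie_x16 / pci, 1))  # 30.3: a 16x slot vs PCI, i.e. "over 29x"
```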

Q:Is PCI Express Video Faster than AGP Video?
A: Yes and no. A 16x PCI Express connection offers nearly twice the bandwidth of AGP 8x (4000 vs 2,100 MB/s one way), but this is only the connection between the system and the video card. You use this connection the most when your video card is low on memory, or when the game you are running uses a DirectX or OpenGL feature that isn't supported in hardware.

So, what this means is that in terms of real world performance there may not be a huge difference between AGP and PCI Express if you are talking about identical chipsets. Unfortunately this is very hard to prove because graphics chipsets are designed either for PCI Express or AGP. If you have a card that is available in both forms then you have a graphics chipset that was designed for PCI Express and has a special bridge chip installed to let it communicate with the AGP bus. The short of this is: if two cards of the same chipset are available in AGP and PCI-E then the PCI-E one will always be faster. On PCI-E you don't have the overhead of the bridge chip so it's faster, and you have the better bandwidth so in intense situations such as high resolution gaming you'll come out on top every time.

The main point here is: if you have a system with AGP on it, it doesn't make sense to upgrade just to get PCI-E video right now. The fastest AGP card to ever come out is likely to be the nVidia 6800GT. If you are at a point where that is too slow, then by all means it makes sense to make a complete switch. If you're happy with your AGP graphics options, wait until you are ready to upgrade the processor or other components before making the PCI-E switch. For more information on AGP and PCI please see the general FSB guide.

Q:What is SLI?
A: SLI, or Scalable Link Interface, is a technology that lets you take two identical nVidia-based graphics cards *that support SLI* and a motherboard *that supports SLI* to achieve a very high level of video performance. SLI works by splitting the rendering of the screen between the two cards: one card renders half, the other card renders the other half. This technique is extremely effective. For instance, two 6600GT cards in SLI can do vastly better than a 6800GT or X800 card, even though the two 6600GT cards cost less. The downside is that SLI is still new and is limited to systems based on AMD 64 / AMD FX Socket 939 processors.

Q: Do I need a special power supply for PCI-E [PCI Express]?
A: Yes and no. Although the PCI-E spec calls for a PCI Express power connector, most PCI-E cards don't currently use it. This means you probably only need to worry about it if you are buying bleeding-edge PCI Express parts. Cards based on the ATI X600, ATI X700, ATI X300, ATI X1300, Nvidia 6600, Nvidia 7600 or Nvidia 7300 series graphics chipsets rarely use the connector. If you are in a situation where you need a PCI Express power connector but the power supply doesn't have one, you can always use a PCI Express power adapter that converts a 4-pin Molex connector to PCI-E power.

Conclusion

PCI Express is an exciting advance in the area of computers. Although AGP is now starting to die rapidly, standard PCI is taking/will take longer to die off. Expect to see at least 1 or 2 standard PCI slots alongside PCI-E in all motherboards for at least the next 2 years. By that time there will be PCI-E replacements for all common devices such as modems, network cards, RAID cards, I/O and more.
DDR2 Short and Sweet

There are a few major things you need to know about DDR2 when building a system:

Basic Functionality: DDR2 memory has a different approach to design at the chip level than DDR. The simplest way to understand how it works is to think of it, at a low level, as two chips of half the stated speed working in tandem to achieve the full stated speed. So DDR2 400 would be something like 2 chips of DDR200 working together to achieve the full 400 speed. Notice that I say "chips", not sticks of memory. All this happens on 1 stick of memory.

The overall effect of this trickery is that manufacturers can scale the speed of the memory beyond the limits of DDR while taking only a small hit to the timing of the memory (how long it takes for the memory to respond to a request).

This means that it's possible, and expected, to see memory speeds of 533 MHz or higher for DDR2. In fact, the current consensus is that if you are building a system and have a choice between a motherboard with DDR1 and one with DDR2, then to see a benefit from DDR2 you need to go at least one speed grade higher than the max normal speed of DDR1 (400 MHz). This is because the timings (latency) of DDR2 are worse than DDR. Essentially, in most situations DDR400 (especially low-latency DDR400) is faster than DDR2 400. However, when you get to DDR2 533 the speed boost makes up for the slower timings.
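A minimal sketch of the clock relationships behind this (the halving factors are the standard DDR/DDR2 ones; the function name is mine):

```python
# For a given data rate (MT/s): both DDR and DDR2 transfer on both
# edges of the I/O clock; DDR2 additionally runs its internal (core)
# clock at half the I/O clock, which is what lets it scale higher.
def clocks_mhz(data_rate, ddr_version):
    io_clock = data_rate / 2
    core_clock = io_clock / (2 if ddr_version == 2 else 1)
    return io_clock, core_clock

print(clocks_mhz(400, 1))  # DDR-400:  (200.0, 200.0)
print(clocks_mhz(400, 2))  # DDR2-400: (200.0, 100.0), two "DDR200-class" halves
print(clocks_mhz(533, 2))  # DDR2-533: (266.5, 133.25)
```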

As far as matching FSB to DDR2 speed, my recommendation is to skip DDR2 400 and go with the following:

800 MHz FSB = DDR2 533 MHz (ideal) or DDR2 400 MHz (matched but slow)
1066 MHz FSB = DDR2 667 (good) or DDR2 533 MHz (matched)

Generally you want the base (system) clock of your memory to match the base clock of your FSB, or be one step above it. The system clock on an 800 MHz FSB P4 is 200 MHz (quad pumped), so that matches DDR2 400 (essentially 200 MHz doubled) or goes well with one step up, DDR2 533 MHz (essentially 266 MHz doubled). Note however that if you only have an 800 MHz FSB processor, DDR2 667 probably isn't going to help much. Once you pass the one-step-above mark on the memory you get diminishing returns, unless you can get to double (DDR2 800 MHz).
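The matching rule above can be checked numerically; this sketch assumes a quad-pumped P4-style FSB and double-pumped DDR2 (the function names are illustrative):

```python
def fsb_base_clock(fsb_mhz):
    return fsb_mhz / 4  # P4-style FSBs are quad-pumped

def ddr2_base_clock(rate):
    return rate / 2     # DDR memory transfers twice per I/O clock

print(fsb_base_clock(800))   # 200.0 MHz
print(ddr2_base_clock(400))  # 200.0 MHz: matched to an 800 MHz FSB
print(ddr2_base_clock(533))  # 266.5 MHz: one step up, the "ideal" pairing
```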

Compatibility: Generally a motherboard will accept either DDR1 or DDR2, not both. The slots are physically different and have a different number of pins; however, people have been known to force memory into the wrong slots (and that ends in horrible results!). Be careful when installing it, and make sure the motherboard takes that kind of memory before attempting.

At the time of this writing only motherboards for the Pentium 4 or Xeon accepted DDR2. AMD Socket 754 and Socket 939 motherboards can't accept DDR2 because of the memory controller integrated into the CPU. AMD is making a line of CPUs that can work with DDR2; they will use new motherboards with a new socket called M2.

Additional Notes on DDR2:
1) DDR2 is not QDR like I mentioned earlier; the technology is different.
2) DDR2 does give you definite benefits and it is recommended.
3) At the time of this writing ALL motherboards that used DDR2 were dual-channel ready.
4) It is not uncommon to hear of problems from people trying to use 3 sticks of DDR2. This stems somewhat from what I mentioned in #3. I recommend using 1, 2 or 4 sticks of DDR2 (more is OK if you are building a server, but add them in pairs; don't use an odd number of sticks if you can avoid it).

Saturday, January 24, 2015

error 0x80041003

"SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage >99
The actual vbscript for test.vbs can be found in:
Click the Start orb, type CMD.EXE, hold CTRL+SHIFT, and press Enter. This will launch an elevated command prompt. Then type the following command at the prompt and press Enter, substituting the actual path to test.vbs:
  cscript.exe "c:\path\to\test.vbs"

Wednesday, January 21, 2015

STOP ERROR 0x0000007A

The KERNEL_DATA_INPAGE_ERROR occurs when memory in the kernel's paged pool has been swapped to the hard disk and cannot be read back when needed at a later time.

Windows broadly divides memory into three pools:

The kernel non-paged pool

The kernel paged pool

The user paged pool
  The kernel non-paged pool contains memory that must remain resident at all times. This includes all of the memory used to handle interrupts, track thread and process states, and handle deferred procedure calls. Memory allocated to the kernel or drivers from this pool is never swapped out to the hard disk drive, and referencing memory in the non-paged pool will never generate a page fault (in fact, PAGE_FAULT_IN_NONPAGED_AREA is another famous BSoD).
  The kernel paged pool contains kernel memory that can be swapped to a page file or swap file located on a secondary storage medium such as a hard disk drive. If memory is infrequently used, it may be swapped out. If the kernel references memory located in the kernel paged pool, it generates a page-fault interrupt that brings it back into physical memory (note that, as I mentioned above, the page-fault interrupt handler operates only on non-paged memory).
  The user paged-pool contains memory that can be allocated to user processes.

To best address your question, imagine a scenario when an application causes a page fault by referencing memory that has been swapped out to a page file that is located on a secondary storage device that has been removed or is inaccessible. That memory can no longer be found, so the application must be terminated as it can no longer proceed.

The same is true when the kernel causes a page fault by referencing memory that has been swapped out to a page file that is located on a secondary storage device that has been removed or is inaccessible. The kernel cannot proceed to a consistent state, so it must terminate and halt the machine.

If you have any removable storage devices, make sure that they are configured as such. If they contain paging files (pagefile.sys, swapfile.sys, hiberfil.sys) they may unwittingly be used to store memory, and that memory goes with them when they are removed. Your machine will run fine until something references the memory that has now disappeared, at which point the referencing task will fail.

Your hard disk drive or SSD may also be failing.
CHKDSK /R will scan for bad sectors, but it won't catch non-deterministic read errors and long-latency events. Check the health of the drive using a vendor-supplied S.M.A.R.T. monitoring tool.
**If you get an inpage error and do not get a memory dump: look for a drive disconnection issue**
A common cause of a kernel inpage error is a disconnection of a drive.
This can be caused by some simple issues. Thermal expansion/contraction of the SATA cable can make and break a connection several times a second, so consider checking the cable connection between your drive and the SATA port.
- There can be bugs in the chipset drivers for the SATA port (update the chipset drivers, or move the cable to a port on another SATA chipset if your board has two).
- There can be bugs in the BIOS setup for the electronics on your board (update the BIOS).
- There are special issues involving solid state drives, where the SSD gets behind on its cleanup routines and takes too long to respond, causing Windows to reset the port, after which the drive does not reconnect.
- Sometimes you can enable hot-swap in the BIOS on the drive's SATA port and it will reconnect (this hides the disconnect issue, but you don't bugcheck).
- Sometimes you have to update the firmware of the SSD because of bugs that cause the drive not to respond correctly (often triggered by using imaging software to install the OS).
- Sometimes you can boot your machine into the BIOS and leave the SSD powered; after about 5 minutes the drive's cleanup routines will start and can fix the errors (just leave it powered and not in use for a few hours).
[Most of the time you will see an Event Viewer error from the disk subsystem indicating that the SATA port was reset.]

You can also get an inpage error from physical problems with the memory sticks: basically, thermal breaks on certain pads where the memory chips connect to the circuit board. The defect is hard to detect, and most people will think it is related to first turning on the machine, but it actually follows the heat cycle of the memory stick. When the board is cool, the circuit contracts and opens an address line (the line is disconnected and therefore reads as logic 0); drivers are loaded into that memory, then the memory chip heats up within 5 to 7 seconds, the connection is made, and the address line gets set correctly. This can cause a whole block of memory to move after a device driver was loaded into it.
The result is memory corruption that only occurs when the memory is cool, and memtest86 would not find this type of failure.
The error code KERNEL_DATA_INPAGE_ERROR (STOP: 0x0000007A) states that the requested page of kernel data could not be read from the paging file into memory. It suggests that there are some issues with the hard disk.
Method 1: I would suggest you try the steps below and check if they help.
a) Click Start, type Command Prompt, right-click it and select Run as administrator.
b) At the command prompt, type chkdsk <volume>: /f /r and press Enter to check for and repair any volume errors or bad sectors on the drive that could be causing this problem.
Try SSDLife:

Download UBCD and run Memtest86+ on the RAM (a couple of passes).
For an SSD, check the manufacturer's site for a diagnostic tool, or post the model here.
E.g., for an Intel SSD:
Intel® Solid-State Drive Toolbox
Intel® Solid-State Drive Firmware Update Tool