Bienvenido! - Willkommen! - Welcome!

Technical log of Tux&Cía., Santa Cruz de la Sierra, BO
Main log: Tux&Cía.
Advanced information log: Tux&Cía.-Información
May the source be with you!

Monday, May 19, 2008

Data Recovery

http://www.advanceddatarecovery.co.uk/raidrecovery.html
Laptop Recovery

Your laptop may produce error messages such as the following:

  • Primary hard disk not found
  • CRC errors when you attempt to copy files
  • Invalid or corrupt FAT
  • Invalid partition table entries
  • System virus attack
  • LED on external hard drive flashing
  • Cannot find file or program
  • Primary/secondary hard disk failure
  • Windows or Mac corrupt directory listing
  • Hard drive is making clicking or ticking noises
  • Hard drive beeps repeatedly
  • Hard drive is silent, which probably means the motor is not spinning
  • Master Boot Record not found
  • Non-system disk or disk error

Your laptop may not recognise the drive, or it may be making unusual noises. If the hard disk is making ANY strange noises, switch the PC off immediately. We can normally recover the data successfully and, in many cases, return it within 24 hours!

Mac Recovery:

All Mac hard drive recovery is now £299.99. No fix, no fee.

Your Mac drive may produce error messages such as the following:

  • Not an HFS volume
  • Drive not installed
  • The disk <drive> cannot be accessed
  • Segment Loader Error
  • Bad file name
  • Not a Macintosh disk
  • Bad master directory block
  • Cannot access because a disk error occurred
  • Directory not found
  • No such volume

Your Mac may not recognise the drive, or it may be making unusual noises. If the hard disk is making ANY strange noises, switch the Mac off immediately. We can normally recover the data successfully and, in many cases, return it within 24 hours!

Step 1
Send your hard drive to our postal address; details are below and on our contact page. Send the hard drive via a suitable postage service along with your contact details (full name, full postal address, phone number & email address). We recommend Royal Mail Next Day Special Delivery. The average cost is £4.50 via this delivery method and the contents are fully insured up to a value of £500.

Our Address:
Advanced Data Recovery
Step 2
Our diagnosis will confirm the level of data recovery that we can attain from your hard drive; we will contact you ASAP to determine what data you want recovered.
Step 3
We will recover the data you want and copy the contents onto suitable media.
Step 4
We send the recovered data back to you via a next-day courier service. Our aim is to send the recovered data back to you within 48 to 72 hours.

Open Source Software - Windows

Saturday, May 17, 2008

Old PCs, more

Source
For the guy with the 486 question: try Libranet 2.7 or 2.8.1 if you can find those ISOs.
I found 2.7 to be the friendliest to old hardware.
http://iso.linuxquestions.org/libranet/libranet-2.8.1-trial-version/
http://www.frozentech.com/content/livecd.php

You might also like to look at Fluxbuntu. It's lightweight and really nice, and it comes with the right packaging/repository system.
DSL uses a 2.4 kernel. I'd bet that Xubuntu would be pretty good on such a
system, barring the question of it having a 2.6 kernel. But then again,
Fluxbuntu would have the same kernel ...

I have a Pentium MMX laptop (233 MHz, 64 MB RAM) and have been running DSL on it for a while. It works great.
I'm also planning on trying Fluxbuntu (Ubuntu with the lighter Fluxbox desktop environment), TinyMe (a lighter version of PCLinuxOS) and a minimal install of Ubuntu (or Debian). Puppy would be my other choice.

Old PCs and Laptops

The typical specs:
  • Intel Celeron 400 MHz
  • 64 MB RAM
  • 4 GB hard drive
  • 2 MB video card
  • CD drive
  • V.90 modem
  • USB
  • Two PCMCIA slots

It's currently running Windows 2000 Pro.

You can't (or shouldn't) run XP, or even 2000. 64 MB is the minimum
requirement for both systems, and you're going to be seriously hampered
by a lot of swapping with either. You could try installing XPlite,
but even so... running something like Firefox is absolutely a no-no. If
you could throw another 64 MB in there you'd be in better shape, but in
reality, for any modern operating system, 256 MB is the bare minimum you
should be using.

Windows Fundamentals for Legacy PCs is so cool that Microsoft doesn't sell it to customers. Windows FLP still requires 64 MB RAM minimum. It really doesn't decrease the
memory footprint all that much; you get hosed as soon as you want to
run more than one application at a time. I feel like I'm beating a dead horse here, but WinFLP, Win2K, and WinXP
all have the exact same memory and processor requirements, both minimum
and recommended. WinFLP, quite frankly, only reduces the hard disk
footprint. It's really not a good OS recommendation for what he has.
None of the above will run half decently with 64 MB of memory, I
guarantee it.

Xubuntu would fly on it. I ran it on an old 233 MHz G3 iMac and was very impressed with its responsiveness.
Even the default Xubuntu install won't work without more RAM. There is
an alternate install you can use, and Xubuntu will run on 64 MB once
installed, but you'd really need at least 128 MB.
Fluxbuntu is quite nice, and would run on that.




I have run both DamnSmallLinux and VectorLinux on a laptop with half that much memory.

Dell Latitude D233ST laptop:
  • Mobile Intel Pentium II 233 MHz
  • 4.3 GB hard drive
  • 64 MB RAM

64 MB is going to hurt, but if you use XFCE4 instead of GNOME, it should be okay.
Ubuntu can run XFCE and Fluxbox, too.
Damn Small Linux is built for modest specifications.
Try Vector Linux: it's a Slackware-based distro built especially for machines with fewer resources. http://www.vectorlinux.com/

Oh, 233. Well, I run Debian with Fluxbox on a 200 MHz box. It's impressive... but not modern.

Linux distros for older hardware

Friday, May 16, 2008

Bit-by-bit cloning

Similar hard disk:
sudo dd if=/dev/hda of=/dev/hdb


For fast copying between filesystems on a linux box, try this:

cd /source-dir
find . -xdev -print0 | cpio -pdvum0 --sparse /dest-dir
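As a hedged sketch of the bit-by-bit copy above with an explicit block size and error handling (the device names /dev/hda and /dev/hdb are only examples; always verify them with `fdisk -l` first, since dd will happily overwrite the wrong disk):

```shell
# Bit-by-bit clone with a 64 KB block size; conv=noerror continues
# past read errors and conv=sync pads short reads so offsets on the
# target stay aligned with the source.
sudo dd if=/dev/hda of=/dev/hdb bs=64k conv=noerror,sync
```

A larger block size mainly speeds things up compared with the default 512-byte records; the copy itself is identical.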
-----------------
Cloning also doesn't let your Tivo use all the available space on the new drive, which I assume is bigger than the old drive.



Just use mfsbackup/mfsrestore. It will also overwrite the XP boot signature.
The backup Tivo image can easily fit on one CD if you use Mfstools.
You can't use the Windows version of Ghost, as booting Windows 2000 or
XP with the Tivo drive attached is a big no-no. (In fact, unplugging
your XP hard drive is a good idea. ) You'll have to use the Ghost
bootable CD-ROM.



Finally, the Linux "dd" command is truly a bit-by-bit copy and has been verified to work for TiVo drives.
I used an older version of Ghost to binary-copy an 80 GB TiVo drive. That worked, but the process was really slow: it took over 8 hours. The Linux cp command using DMA was much faster; if I recall, that took less than 4 hours. Ghost at least had a progress indicator. Back then I did not know about the dd command.
----------------
To clone a hard drive of 7 GB capacity (ATA-33) to a new drive of 40 GB (ATA-100) with the 'dump' and 'restore' commands, proceed as follows
(remark: the old and new drives are of unequal capacity):

1)
connect ATA-100 drive as Primary slave

2)
# fdisk -l (to list the partitions on the ATA-33 drive, e.g.)
Code:

Device     Boot  System
/dev/hda1  *     Linux
/dev/hda2        Linux
/dev/hda3        Linux swap
/dev/hda4        Linux (/var)
etc.


3)
Partition the ATA-100 drive the same way as the ATA-33 drive, but with different sizes:
# fdisk /dev/hdb   (use the 'n' command to create each partition)
etc.

4)
Then create the following mount points (still booted from the ATA-33 drive):
# mkdir /mnt/boot
# mkdir /mnt/root
# mkdir /mnt/var

5)
# mount /dev/hdb1 /mnt/boot
# mount /dev/hdb2 /mnt/root
(/dev/hdb3 is swap)
# mount /dev/hdb4 /mnt/var

6) Do as follows:
Code:

# dump -f - /boot | (cd /mnt/boot; restore -rf -)
# dump -f - /     | (cd /mnt/root; restore -rf -)
# dump -f - /var  | (cd /mnt/var; restore -rf -)
Are these command lines correct?
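The pipe pattern in those lines (an archiver writing to stdout, piped into an un-archiver that reads stdin inside the target directory) can be sanity-checked with tar, which follows the same "-" convention; the paths here are placeholders:

```shell
# Same stdout-into-stdin pattern as the dump | restore lines above:
# archive /source-dir to stdout and unpack it inside /dest-dir.
(cd /source-dir && tar -cf - .) | (cd /dest-dir && tar -xf -)
```

The subshells matter: each `cd` affects only its side of the pipe, so the archive is always created and extracted relative to the right directory.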

----------------
Copied from alma.ch:
Cloning NTFS partitions -> ntfsclone.
It worked perfectly, and was pretty fast (using Gigabit
Ethernet).
Ingredients:
  • The machine to clone, which I will boringly call Master.
  • The
    new clone, unsurprisingly called Slave here, and which is assumed to be
    made of identical hardware, particularly the hard disk.
  • Gigabit Ethernet.
  • A server for the Master disk image
  • A Linux "Live CD" (I used Knoppix 3.8.1, which had ntfsclone version 1.9.4).

The Backup:

Boot Master from the Linux live CD

Open a root shell

Set my Swiss keyboard layout if needed
# setxkbmap fr_CH
or in newer versions:
# setxkbmap ch fr or setxkbmap ch de


Check if the network is up
# ifconfig eth0

It wasn't for me, and DHCP tended to fail for some reason, so I configured it manually:

# ifconfig eth0 192.168.1.27
# echo nameserver 192.168.1.4 > /etc/resolv.conf
# echo search example.lan >> /etc/resolv.conf
# route add -net 0.0.0.0 gw 192.168.1.100

Since the machine was displaying the wrong time and time zone, I also ran
# tzselect
and pasted the string it suggested on the command line,
# TZ='Europe/Zurich'; export TZ
and then set the clock:
# ntpdate pool.ntp.org
# hwclock --systohc

This
was not really necessary, but I noticed that the file times on the
server would be wrong if the client had a wrong time and/or time zone.

And now the real stuff:

Create a mount point
# mkdir /tmp/server

Mount
the server's share. I used a share called diskimages on a Samba server,
but it could have been Windows, an NFS server, or whatever.
# mount -t smbfs -o username=my_user_name //server_name/diskimages /tmp/server

Check how your live CD named the partitions you want to save:
# cat /proc/partitions
major minor  #blocks  name

8       0   78150744  sda
8       1   20482843  sda1
8       2          1  sda2
8       5   20482843  sda5
8       6   37182411  sda6
180     0     253952  uba
180     1     253936  uba1
240     0    1939136  cloop0

I want to save that 80 GB disk sda, which has a primary partition sda1,
and an extended partition sda2 containing logical partitions sda5 and
sda6. So what I want to save is sda1, sda5 and sda6.
First I saved the partition table and the Master Boot Record:
# sfdisk -d /dev/sda > /tmp/server/master-sfdisk-sda.dump
# dd if=/dev/sda bs=512 count=1 of=/tmp/server/master-sda.mbr

and then the partitions:
# ntfsclone -s -o - /dev/sda1 | gzip | split -b 1000m - /tmp/server/master-sda1.img.gz_
# ntfsclone -s -o - /dev/sda5 | gzip | split -b 1000m - /tmp/server/master-sda5.img.gz_
# ntfsclone -s -o - /dev/sda6 | gzip | split -b 1000m - /tmp/server/master-sda6.img.gz_

This is where I fell into the first trap. My Samba server doesn't seem to
accept files larger than 2 GB! That is why the output is piped
through split. I still don't know why I cannot write files larger than
2 GB, and if you do, please let me know.
This is a Samba 3.x server running on Debian with a 2.6.x kernel, and the share is on a 36 GB ext3 partition.
(Update: a comment on the original post suggests adding the lfs option to the smbfs mount. This allowed me to write more than 2 GB, but not more than 4 GB, probably because it's a FAT32 partition.)

Anyway, split solved that problem nicely, chopping the output into 1 GB files, but I had added gzip in the hope of making things faster, and that gzip and split combination bit me later. And I'm not even sure that the gzip overhead is worth the bandwidth saving. Gigabit Ethernet can be really fast; in fact, it can be faster than the hard disks. That may be worth benchmarking some time. (I also tried bzip2, which has better compression, but that was excruciatingly slow.)

That's it for the backup. Now, to the next part:

The Restore:

Boot the Slave from the Linux live CD, and proceed as for the backup:

# setxkbmap fr_CH
# ifconfig eth0 192.168.1.27
# echo nameserver 192.168.1.4 > /etc/resolv.conf
# echo search example.lan >> /etc/resolv.conf
# route add -net 0.0.0.0 gw 192.168.1.100
# TZ='Europe/Zurich'; export TZ
# ntpdate pool.ntp.org
# hwclock --systohc
# mkdir /tmp/server
# mount -t smbfs -o username=my_user_name //server_name/diskimages /tmp/server

(I just copied/pasted this whole block into the shell.)

Check your partitions again, and make sure you will not overwrite some other disk!
# cat /proc/partitions

Now I first restored the partition table and the master boot record:
# sfdisk /dev/sda < /tmp/server/master-sfdisk-sda.dump
# dd if=/tmp/server/master-sda.mbr of=/dev/sda

And then the partitions. Since I had several files produced by split for my
primary partition, I needed to take them all, in the right order of
course. split adds "aa", "ab", "ac", etc. to the end of the file name.
# ls -l /tmp/server
will help you check which files you need.

This is where the second trap got me. gunzip's documentation led me to believe that I could do something like
# gunzip -c file1 file2 file3 | ntfsclone ...
which would be the same as
# cat file1 file2 file3 | gunzip -c | ntfsclone ...
Well, it is not the same, and my first tries would result in the process aborting after a (long) while, with the error "gunzip: unexpected end of file".

Eventually, it worked:
# cd /tmp/server
# cat master-sda1.img.gz_aa master-sda1.img.gz_ab master-sda1.img.gz_ac | gunzip -c | ntfsclone --restore-image --overwrite /dev/sda1 -
# cat master-sda5.img.gz_aa | gunzip -c | ntfsclone --restore-image --overwrite /dev/sda5 -
# cat master-sda6.img.gz_aa | gunzip -c | ntfsclone --restore-image --overwrite /dev/sda6 -
Reboot into your new Windows XP clone.

Now I wonder if there is anything I overlooked with machine IDs (SID?) and such, but I haven't seen a problem so far.
Do I need to do something, to change the SID of the clone?

If you don't need to save the image and want to be faster, you could of
course combine this method with netcat and skip the server.

Note: the NFS solution is the easy one, but if you want to get REAL speed out of your gear you should use netcat & dd.

The NFS trick got me to a maximum of 17 MB/s over a gigabit network;
netcat does it at 34 MB/s over the same gear (do the math: that's 1 GB in only 30 seconds!).

Stop using NFS; NetBIOS/SMB are bad for you.

Note 2:
There is no requirement for ntfsclone to restore to a partition of the
same size, other than having enough space, of course. Create the
partitions manually if you wish and then restore to the one you want.
Note that you may have issues if the first block of the partition is in a
different place, i.e. if you generated the image from /dev/hda1 and
restored to /dev/hda2... Not sure, but I think Windows is a bit dim...

Note 3:
Firstly, use "mount -t cifs". smbfs is deprecated, and a hack.

Second, you can restore an NTFS image to a larger partition: create it in
fdisk, make sure it has type id 7, and then simply run
"ntfsresize /dev/hda1" or whatever. That's magic, that is!

Third, restoring a bootable Windows to a different partition is slightly
painful. Having a bootable XP/W2K CD will help a lot (fixboot and
bootcfg), but you can help yourself by adding extra lines for partitions
1, 2, 3, 4 to boot.ini before starting!

I'm not sure if fixboot will fix up the "start offset" in the ntfs partition boot sector but I did it by following this.

Essentially you change the dword at offset 0x1C to give the start sector
number (get it from "fdisk -ul"). Mine was /dev/hda1 and the start sector
was 63, or 0x3F. So I put the bytes 3f,00,00,00 (little-endian order) in
from offset 0x1C. I did it without a hex editor, but you don't want to
know how! (dd and vi -b!)
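A sketch of the same patch done with dd alone, assuming the start sector is 63 (0x3F) as in the example above; the device name is illustrative, and you should back up the boot sector first:

```shell
# Write the dword 0x0000003F (little-endian: 3f 00 00 00) at byte
# offset 0x1C (decimal 28) of the partition boot sector, leaving
# the rest of the sector untouched. \077 is octal for 0x3F.
printf '\077\000\000\000' | dd of=/dev/hda1 bs=1 seek=28 count=4 conv=notrunc
```

`conv=notrunc` is the important part: without it, dd would truncate the target at the end of the write.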

Note 4:
Do this with Knoppix here.

----------
To change the SID



GNU Parted
Mondo
Partimage
G4U Imager
Cloning Hard Drives in FreeBSD Linux Windows NetBSD OpenBSD


Other note:
Ghost sees a ReiserFS partition as if it were full of data, whether it is or not. If the partition is 70 GB then Ghost will think it has a full 70 GB of data on it, even if you have less than 40 GB in reality. There are other image-backup apps out there that handle ReiserFS correctly, but I've always used Ghost because that is what has been available to me. Ext(x) works fine. I haven't tried other filesystem types. Ext2, ext3 and vfat work like a charm.

Thursday, May 15, 2008

The digital fingerprint

Every file has its own digital fingerprint, unique and extremely difficult to reproduce or clone; every serious Linux or Unix user knows the hash mechanisms for a file. I have known about them since mid-1995 and have used them since my first download of an ISO file under Linux kernel 1.2.
Theoretically, MD5 and SHA1 are algorithms for computing a 'condensed representation' of a message or a data file. The 'condensed representation' is of fixed length and is known as a 'message digest' or 'fingerprint'.
The MD5 hash, also known as the checksum of a file, is a 128-bit value, something like a fingerprint of the file. There is a very small possibility of getting two identical hashes for two different files. This feature can be useful both for comparing files and for integrity control.

Hash length
The length of the hash value is determined by the type of the used algorithm, and its length does not depend on the size of the file. The most common hash value lengths are either 128 or 160 bits.

Non-discoverability
Every pair of nonidentical files will translate into a completely different hash value, even if the two files differ only by a single bit. With today's technology it is computationally infeasible to deliberately construct a file matching a given hash value (although collisions have been demonstrated for MD5).

Repeatability
Each time a particular file is hashed using the same algorithm, the exact same hash value will be produced.

Irreversibility
All hashing algorithms are one-way. Given a checksum value, it is infeasible to recover the original input. In fact, none of the properties of the original message can be determined given the checksum value alone.

The algorithm was invented by:
Professor Ronald L. Rivest (born 1947, Schenectady, New York) is a cryptographer, and is the Viterbi Professor of Computer Science at MIT's Department of Electrical Engineering and Computer Science. He is most celebrated for his work on public-key encryption with Len Adleman and Adi Shamir, specifically the RSA algorithm, for which they won the 2002 ACM Turing Award.

This is a method of checking the integrity of files that could be altered by viruses, very common in the Unix and Linux world, and also used by certain antivirus programs, with the following mechanism:

Each file is internally assigned a tab called "File Hashes". The tab contains the MD5, SHA1 and CRC-32 file hashes. These are common hashes that are used to verify the integrity and authenticity of files. Many download sites list the MD5 hash along with the download link.

For instance, when you download or receive a file, you can use MD5 or SHA-1 to guarantee that you have the correct, unaltered file by comparing its hash with the original. You are essentially verifying the file's integrity.
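On a typical Linux system this check is a one-liner; the file name below is a placeholder for whatever you downloaded:

```shell
# Compute the MD5 and SHA-1 fingerprints of a downloaded file and
# compare the printed hashes with those published on the download page.
md5sum ubuntu.iso
sha1sum ubuntu.iso
```

When the site ships a checksum file, `md5sum -c checksums.md5` automates the comparison and reports OK or FAILED per file.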

The Secure Hash Algorithm Directory


SHA-1: The Secure Hash Algorithm (SHA) was developed by NIST and is specified in the Secure Hash Standard (SHS, FIPS 180). SHA-1 is a revision to this version and was published in 1994. It is also described in the ANSI X9.30 (part 2) standard. SHA-1 produces a 160-bit (20 byte) message digest. Although slower than MD5, this larger digest size makes it stronger against brute force attacks.

In both cases (SHA-1 or MD5), the fingerprint (message digest) is also non-reversible: your data cannot be retrieved from the message digest, yet as stated earlier, the digest uniquely identifies the data.

The md5 sum of a (text) file (message or email)
Message Digest 5 is a standard algorithm which takes as input a message of arbitrary length and produces as output a 128-bit fingerprint or message digest of the input.
Somewhat similar in general concept to a CRC (cyclic redundancy check), the MD5 algorithm is also used as part of the SNMPv3 (Simple Network Management Protocol version 3) security subsystem.

Computer forensics - Interpol

Interpol, March 3, 2008 - May 15, 2008
Evidence to analyze (consistency, tampering, deletion, changes of dates and content, etc.):
3 laptops
2 external disks
3 flash memory disks

Results:
600 GB of data
Among them (apart from the files belonging to the operating system and applications):
7,989 email addresses
22,000 web pages
37,872 text documents
452 spreadsheets
>200,000 photos
10,537 multimedia files (video, sound)
983 encrypted files (password-protected or with application-level algorithms)
"It would take more than 1,000 years to read everything at 100 pages per day." ---Secretary General of Interpol
Total number of accesses to the machines since the operating system was installed: 48,550 (which indicates that these cannot be newly bought machines with recently created files, planted as false evidence over thousands of operating system restarts).
64 specialists from 15 countries worked more than 5,000 hours analyzing these files. Two teams were formed under two computer forensics specialists (one from Singapore, one from Australia), who spent more than 1,000 hours independently analyzing the integrity of the files, their structure and the other digital data of each of the four assigned pieces of evidence.
The computer specialists used 10 networked computers for two weeks (day and night) to crack the 983 files that had been encoded and password-protected by their author to prevent unauthorized access to them.

The evidence was analyzed with many methods and algorithms; for example, the specialists were given 59 documents used in a press conference by the Colombian government after the bombing, and the computer forensics experts (who do not speak Spanish) found the paths and the MD5 hashes indicating exactly the same content.

After 5,000 hours of work analyzing the evidence, Interpol found no alteration whatsoever in the electronic evidence examined.
The MD5 hash of each document was analyzed and recorded, together with its creation, access and modification dates.
And everything was stored for future reference, to prevent anyone from questioning its existence or fidelity.
No one can ever credibly claim that this evidence was altered, deleted or added to.
Interpol certifies that it is 100% sure that the analyzed evidence belonged to (alias) Raúl Reyes and was recovered from a terrorist camp!
I imagine samples were taken of dust, pollen and explosive residue that could only come from the bombed terrorist camp. But the implicated leftists are going to cast doubt on plenty of things...
Don't doubt it!

See more on the subject

Time Zones

Monday, May 12, 2008

Old tech in high tech

Data rescued from the lost Columbia
The shuttles were designed in the 1970s. To this day, the shuttles' telemetry uses first-generation chips like those found in PCs in the early eighties. Years ago, prices for the ancient 8086 CPUs (4.77 MHz, introduced in 1978) shot up on eBay after it became known that NASA was buying them there: fresh stock of this kind has not existed since the mid-eighties. They are still used in the remaining shuttles today.

And apparently the software is not much more modern either: the Columbia hard drive had been written by a computer running DOS, a piece of software with roots in the sixties, developed into a PC operating system in the eighties and still in use until at least 1994. But exactly that turned out to be a stroke of luck: while Windows machines scatter their data in fragments all over the hard drive, wherever there happens to be space, DOS writes data contiguously, from the inside of the platter outwards.

Data lost?

DATA RECOVERY PROGRAMS
The data recovery tool EasyRecovery can be downloaded free of charge at ontrack.de/easyrecovery. However, to save the recoverable files, a license must be purchased.

It costs 213 euros. The so-called Light version of the program costs 94 euros, but is limited to recovering 25 files per scan.

GetDataBack is available at runtime.org. The license for NTFS hard drives costs 94 euros here, for FAT hard drives 82 euros.

If you don't know which file system your hard drive uses, open the My Computer folder under Windows, select the hard drive icon and display its properties... [The hard drive is broken, you dummy!]

The freeware tool PC Inspector File Recovery is available for download at pcinspector.de.

For recovering pictures from memory cards, the program Digital Photo Recovery is available at photosrecovery.com for 29 euros. A freeware tool of the same name is available for download at foto-freeware.de/digital-photo-recovery.php. The free version has the disadvantage, however, that video files, for example, cannot be rescued as such, but only restored as a sequence of single frames.
ddp

Thursday, May 1, 2008