Educating yourself does not mean that you were stupid in the first place; it means that you are intelligent enough to know that there is plenty left to 'learn'. -Melanie Joy

Saturday, 31 January 2015

Secure way of deleting files in Unix

January 31, 2015 Posted by Dinesh

As I mentioned in the previous post, there are simple techniques that can recover a deleted file (using basic 'grep' or 'strings' utilities).
Even some data recovery tools do essentially the same thing. So if you want to delete some data on the disk without worrying about retrieval,
you should overwrite the disk blocks that hold your file's content.

The shred utility available in Linux does exactly this.
shred overwrites the specified file repeatedly, in order to make it harder for even very expensive hardware probing to recover the data.
By default shred overwrites the file several times with junk data (25 passes in older versions of GNU coreutils; newer versions default to 3).

You can choose to remove the file after overwriting it by using -u (unlink).
There are multiple other options in shred worth exploring; a couple of them are shown after the basic example below.

$ shred -u filetobedeleted.txt
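
For instance (assuming GNU coreutils shred, and using the same placeholder file name), you can choose the number of passes with -n, add a final pass of zeros with -z so the file does not look shredded, and watch the progress with -v:

$ shred -v -n 5 -z -u filetobedeleted.txt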

Just to see how it works, let's say my script writes some data to the file 'testlog.log' every minute.
I am tailing the file in one terminal, and in the other terminal I execute shred.
 
$ sh writetodisk.sh &
$ tail -f testlog.log
aaa
bbb
ccc
ddd


$ shred testlog.log

Now observe the first terminal:

$ tail -f testlog.log
aaa
bbb
ccc
ddd
\{XÁÀà_ç æIƒòDÊ5žq­Æ 8<TÝõ ¬ S õŸt1’ïNÐ , éM‚?$Väé@. l"®ÎþÌÕæ ‡Ù+Ž’ bªO"× #f©ÎçN‰/h÷¡çÊhÇöŸz!*ÀA?RAo%æ} ÛZ½PSàpû7Íû3U_ ’e^u÷züê¾Ú6󚶄Ë[Fœ;½êê±î÷]¤¥ˆi                                                                                  ÕÎ8ƒ:SÎq3®B h€'Q“ãªF¹X‘Q'†GÁ–oõ»hï eþ:½U4Úy_£È‘”f}"J_ŠÒ‡±Ê0íÕwº }rºŸoÇpÜ Wá‚À°xfeÒ?ÕC·         ‰JðhJë ™ÀQêM]ÞÑÅ,A {9b ÑùÇ@©}ÅŠ½°Ò¡øÜK-òõ ªLoLƒü
GýÑeÈ#WsG`Þ¼µÅ"–> T/~ [ºÝ ¸ýŒ<C8îzD±š¨š J
#Lwk{lû´köAٍ^0ê(9¿Ó Xnš¼¼ýc+7×Ãó ‡@ ;¥
                                        ŽBýˆÔ
                                              ÀF ⍠?’‰´q’+iQ‰ Y¸¯`± {·;²&%6ÈÄLYdù½­ š¼ÑÖi…ö±É* ÝÜ(Y2Ðc FÔ]þŠ ˜° ˜ƒTãðõ,l‚šl„bÜ8Å òU='µ YR™&iõqmôT ¤¿)“G[¡9îÎD ÉšDÒ–„xFÀjKNs„)½3̆^¹°w

You can see that shred filled the contents of the file with garbage data.



How to recover a file that was removed using the 'rm' command?

January 31, 2015 Posted by Dinesh

In Unix-like file systems, the system uses 'hard links' to point to the piece of data that you write to the disk.
So when you create a file, you also create its first hard link, and you can create additional hard links to it using the 'ln' command.
When you "delete" a file using the rm command, normally you are only deleting a hard link.

If all hard links to a particular file are deleted, the system only removes the reference to the data and marks those blocks as free. It won't actually wipe the data itself.
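
For example (the file names here are only illustrative), you can create a second hard link and check it with ls -li; both names will show the same inode number and a link count of 2:

$ ln myfile.txt mylink.txt
$ ls -li myfile.txt mylink.txt

Removing either one of the names with rm still leaves the data reachable through the other.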

If your deleted file is still open in any running process, then you still have one link left to your file!
Check whether any process is holding your file open using the lsof command:

$ lsof | grep "myfile.txt"
COMMAND    PID     USER   FD      TYPE    DEVICE   SIZE     NODE    NAME
pgm-name   7099    root   25r    REG     254,0    349      16080   /tmp/myfile.txt

Using the process ID (7099) and file descriptor (25) from the lsof output, you can try copying the file back:


$ cp /proc/7099/fd/25 /mydir/restore.txt

If lsof doesn't list your file, then you can try to locate your data by reading directly from the device.
But this works only if the blocks containing your file haven't been claimed by something else.

To make sure nothing else overwrites those free blocks, immediately remount the file system read-only and then search for your file.

$ mount -o ro,remount /dev/sda1
$ grep -a -C100 "unique string" /dev/sda1 > file.txt

Replace /dev/sda1 with the device the file was on, and replace "unique string" with a string that appears only in your file.
What this does is search for the string on the device, return 100 lines of context around each match, and write them to file.txt.
If you need more lines returned, just adjust the -C option as appropriate. Alternatively, you can use the -A and -B options of grep to print only the lines after or before the matched string, respectively.

You might get a bunch of extra garbage, mostly binary data, but you can get your data back.
If you don't want this binary data, you can run 'strings' on the device and grep for the unique string:

$ strings /dev/sda1 | grep -C100 "unique string" > file.txt



Thursday, 29 January 2015

Calculating time differences in Python

January 29, 2015 Posted by Dinesh


Recently I had a situation where I needed to measure the efficiency of a timer.
I have a C++ timer that calls my function every 5 seconds. My function does some critical operation and logs one statement to syslog.
When I observed the logs, I found that there is a delay in my function's execution; slowly the timer started drifting and the subsequent function calls were getting delayed!

So I wanted to calculate how many times the timer was delayed in a day. I grepped for the particular log line in my syslog and redirected the output to a file.

The file format is like this:
Jan 29 06:34:24
Jan 29 06:34:29
Jan 29 06:34:34
Jan 29 06:34:39
..
..
Now I need to compare consecutive lines in the file and log a message whenever the time difference is greater than 5 seconds:
compare line 1 with line 2,
line 2 with line 3, then line 3 with line 4, and so on...
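
A first attempt might look something like this (a rough sketch; 'timerlog.txt' is just a placeholder for the grepped log file, and the year is ignored since the log lines don't carry one):

from datetime import datetime

# rough sketch: assumes every line of timerlog.txt is a timestamp like "Jan 29 06:34:24"
with open('timerlog.txt') as f:
    lines = f.readlines()    # loads the entire file into a list

for i in range(len(lines) - 1):
    t1 = datetime.strptime(lines[i].strip(), '%b %d %H:%M:%S')
    t2 = datetime.strptime(lines[i + 1].strip(), '%b %d %H:%M:%S')
    diff = (t2 - t1).total_seconds()
    if diff > 5:
        print('delay of %d sec between line %d and line %d' % (diff, i + 1, i + 2))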



Here the bad thing is f.readlines()... it loads the whole file into a list even though I only compare two lines at a time.
If anybody reads this post :P and knows a better working solution, please share. :)