Intel Processor FSB Info:
Base clock (MHz) = Effective FSB (MHz)
133 = 533 MHz
200 = 800 MHz
266 = 1066 MHz
333 = 1333 MHz
400 = 1600 MHz
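(Intel's FSB is quad-pumped, so the effective rate is four transfers per base clock: for example, 266 MHz x 4 = 1066 MT/s, marketed as a 1066 MHz FSB.)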
_________________________

Memory Info:
Memory Speeds (MHz)
and Peak Transfer Rates (PC ratings, in MB/s):

DDR1 266 MHz = PC2100
DDR1 333 MHz = PC2700
DDR1 400 MHz = PC3200
_________________________

DDR2 533 MHz = PC2-4200
DDR2 667 MHz = PC2-5300
DDR2 800 MHz = PC2-6400
_________________________
 
DDR3 1066 MHz = PC3-8500
DDR3 1333 MHz = PC3-10600
DDR3 1600 MHz = PC3-12800
DDR3 1866 MHz = PC3-14900
_________________________
 
DDR4 2133 MHz = PC4-17000
DDR4 2400 MHz = PC4-19200
DDR4 2666 MHz = PC4-21300
DDR4 2933 MHz = PC4-23400
DDR4 3200 MHz = PC4-25600
_________________________
Note:
* The actual data rate achieved depends on the CPU's memory controller and the motherboard configuration.
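* The PC rating encodes the peak bandwidth in MB/s: the effective transfer rate (MT/s) multiplied by 8 bytes per transfer on a 64-bit bus. For example, DDR3-1600: 1600 x 8 = 12800 MB/s, hence PC3-12800.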
Tip: Download Accelerator for Linux

(http://linuxhelp.blogspot.com/)


There are different ways of downloading files from remote locations. While all web browsers support downloading files via HTTP or FTP, I have found that Firefox (at least) doesn't support resuming interrupted downloads. This is not a problem for small files, but when downloading huge files such as ISO images of Linux distributions, the inability to resume from where you left off means you have to start the download again from the beginning.

The most common and fail-safe method of downloading huge files in Linux is the wget tool. With the -c option, you can resume a download at a later stage if it fails due to a connection timeout.

I usually use the following wget command to download Linux distributions.
$ wget -c full_path_to_the_linux_iso
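If the connection drops often, a slightly more robust variant (my own habit, assuming GNU wget) keeps retrying automatically; -t 0 means unlimited retries and -T 30 sets a 30-second timeout:
$ wget -c -t 0 -T 30 full_path_to_the_linux_iso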
Using curl to speed up your downloads
But here is a nice tip to speed up your downloads by a significant factor. It applies if you have a high-bandwidth Internet connection, on the order of 4 Mbps and above. Different mirrors usually have different speeds: a Linux distribution mirror in, say, Japan may sit on a 100 Mbps connection while a mirror elsewhere is connected to only a 10 Mbps pipe. Moreover, these mirrors often throttle the bandwidth made available to each individual connection, imposing an upper limit on download speed.

What you can do is split the file you are downloading (in our case, the ISO image) into a number of pieces and download each piece from a different mirror simultaneously. At the end of the download, you combine all the pieces to get your file back in one piece. This is known as download acceleration: the software connects to more than one location simultaneously and splits the download among them. The feature is common in download managers for Windows but hard to find in those available for Linux.

You can get the same feature in Linux by using the program curl. Let's say I want to download a 700 MB Ubuntu ISO image. I split the download into 4 parts as follows:
$ curl --range 0-199999999 -o ubuntu-iso.part1 $url1 &
$ curl --range 200000000-399999999 -o ubuntu-iso.part2 $url2 &
$ curl --range 400000000-599999999 -o ubuntu-iso.part3 $url3 &
$ curl --range 600000000- -o ubuntu-iso.part4 $url4 &
where url1, url2, url3 and url4 are defined as follows:
url1=http://ubuntu.intergenia.de/releases/feisty/ubuntu-7.04-desktop-i386.iso
url2=http://ubuntu.virginmedia.com/releases/7.04/ubuntu-7.04-desktop-i386.iso
url3=http://es.releases.ubuntu.com/7.04/ubuntu-7.04-desktop-i386.iso
url4=http://ftp-mirror.internap.com/pub/ubuntu-releases/7.04/ubuntu-7.04-desktop-i386.iso
This creates four background download processes, each transferring a different part of the ISO image from a different server. The --range option asks the server for only the specified byte range of the file (the server must support HTTP range requests for this to work). The -o option provides the name of the file the data is saved to. Once all four curl processes finish, you will have four files, ubuntu-iso.part1 through ubuntu-iso.part4, in your current directory.
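Before splitting, you can check that a mirror honours range requests; this check is my addition, not part of the original tip. A reply containing "Accept-Ranges: bytes" indicates support:
$ curl -sI $url1 | grep -i accept-ranges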

To get the original Ubuntu ISO file, I just combine the parts using the cat command:
$ cat ubuntu-iso.part? > ubuntu-7.04-desktop-i386.iso
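It is worth verifying the reassembled image before using it. A quick sanity check, assuming the mirror publishes an MD5SUMS file alongside the release (Ubuntu mirrors do):
$ md5sum ubuntu-7.04-desktop-i386.iso
Compare the printed hash against the corresponding entry in the mirror's MD5SUMS file.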

As I said earlier, if you have a high-speed Internet connection, this method will considerably shorten the time taken to download huge files.
As an aside, I have entered all the commands in a file as follows:
#!/bin/sh
# FILE NAME : da.sh (Download accelerator)
url1=http://ubuntu.intergenia.de/releases/feisty/ubuntu-7.04-desktop-i386.iso
url2=http://ubuntu.virginmedia.com/releases/7.04/ubuntu-7.04-desktop-i386.iso
url3=http://es.releases.ubuntu.com/7.04/ubuntu-7.04-desktop-i386.iso
url4=http://ftp-mirror.internap.com/pub/ubuntu-releases/7.04/ubuntu-7.04-desktop-i386.iso

curl --range 0-199999999 -o ubuntu-iso.part1 $url1 &
curl --range 200000000-399999999 -o ubuntu-iso.part2 $url2 &
curl --range 400000000-599999999 -o ubuntu-iso.part3 $url3 &
curl --range 600000000- -o ubuntu-iso.part4 $url4 &

# Wait for all four background downloads to finish.
wait
Then I set the executable bit on the file and run it:
$ chmod u+x da.sh
$ ./da.sh
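Going a step further, here is a sketch of my own (not from the original tip) that computes the byte ranges from the server-reported Content-Length instead of hard-coding them. It assumes the mirrors report the file size, support HTTP range requests, and that the URLs do not redirect; the name pcurl.sh and its argument convention are made up for illustration.

#!/bin/sh
# FILE NAME : pcurl.sh (hypothetical generalisation of da.sh)
# Usage : ./pcurl.sh output_file url1 url2 [url3 ...]

out=$1
shift

# Ask the first mirror for the file size with a HEAD request
# (assumes the URL answers directly, without redirecting).
size=$(curl -sI "$1" | tr -d '\r' | awk 'tolower($1) == "content-length:" { print $2 }')

n=$#                        # one part per mirror
chunk=$((size / n))

i=1
start=0
for url in "$@"; do
    if [ "$i" -lt "$n" ]; then
        range="$start-$((start + chunk - 1))"
    else
        range="$start-"     # last part runs to the end of the file
    fi
    curl --range "$range" -o "$out.part$i" "$url" &
    start=$((start + chunk))
    i=$((i + 1))
done

wait                        # block until every part has finished

# Reassemble; glob order is correct for up to nine parts.
cat "$out".part* > "$out"

Run it with the output name followed by the mirror URLs, for example:
$ ./pcurl.sh ubuntu-7.04-desktop-i386.iso $url1 $url2 $url3 $url4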



 
