Before running the
program it is advisable to adjust the general settings.
To do this, launch
the program and choose Default Options.
The first thing to do is decide which directory you will use to save
project files, and the path to the directory where files copied (downloaded)
from the Internet will be stored.
Unless this option is selected, the program will only save a list
of scanned hyperlinks to a special file.
Then choose any other options you would like to use when downloading and
searching for hyperlinks.
Follow new links / URL
This option allows you to search for local links on the website you are scanning, i.e. links that refer to other documents on the same website.
Stay within initial domain list
For example, suppose you only need to download the following list of URL addresses:
www.internet-soft.com
www.softwarea.com
and you do not need to download the other domains linked from this
original list of domains (e.g. internet-soft.com).
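The domain restriction described above can be sketched as a simple host check. This is an illustrative assumption of how such an option could work, not the program's actual code; the names below are hypothetical.

```python
from urllib.parse import urlparse

# The domains you originally listed for the project (illustrative values).
INITIAL_DOMAINS = {"www.internet-soft.com", "www.softwarea.com"}

def within_initial_domains(url: str) -> bool:
    """Return True if the URL's host is in the initial domain list."""
    host = urlparse(url).netloc.lower()
    return host in INITIAL_DOMAINS

# A link to a listed domain is followed; any other domain is skipped.
print(within_initial_domains("http://www.internet-soft.com/index.html"))  # True
print(within_initial_domains("http://www.example.org/page.html"))         # False
```

With this option off, the crawler would follow links to any domain it encounters; with it on, every candidate URL passes through a check like the one above first.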
An example will help to illustrate this option. Let's assume there is a hyperlink from one site to a second site, a link from the second site to a third, and so on.
As you can see, a number of hyperlink steps must be followed to get from one site to another. This option sets the greatest possible number of hyperlink steps; each step can lead to hyperlinks on a number of other websites. So if you have selected only one level, you will only be able to copy the websites (let's call them X1 websites) to which there is a link on the website you are downloading (scanning), and not the sites linked from those X1 websites.
The following chart shows how the links level limit works.
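The links level limit behaves like a depth-limited crawl. The sketch below illustrates this with a toy in-memory link graph instead of real web pages; the graph and the function name are assumptions for illustration only.

```python
from collections import deque

# Toy link graph: site-A links to site-B, which links to site-C.
LINKS = {
    "site-A": ["site-B"],
    "site-B": ["site-C"],
    "site-C": [],
}

def crawl(start: str, max_levels: int) -> set:
    """Breadth-first crawl that follows at most max_levels hyperlink steps."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        page, level = frontier.popleft()
        if level == max_levels:
            continue  # depth limit reached: do not follow further links
        for target in LINKS.get(page, []):
            if target not in seen:
                seen.add(target)
                frontier.append((target, level + 1))
    return seen

# With one level, only the directly linked site is copied:
print(sorted(crawl("site-A", 1)))  # ['site-A', 'site-B']
# With two levels, the crawl also reaches the site linked from site-B:
print(sorted(crawl("site-A", 2)))  # ['site-A', 'site-B', 'site-C']
```

Each extra level lets the crawler take one more hyperlink step away from the starting site, which is why disk usage can grow quickly as the limit increases.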
When this time expires, the
program stops waiting and starts downloading the next document.
This option sets the number
of attempts to download the same file if the connection to the provider or
the website link is broken off. The program will make as many download
attempts as you specify.
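The retry behaviour can be sketched as a loop that attempts the same download a fixed number of times and stops early on success. The function names below are illustrative stand-ins, not the program's actual internals.

```python
def download_with_retries(fetch, max_attempts: int):
    """Call fetch() up to max_attempts times; return its result or None."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except ConnectionError:
            # Connection to the provider or website broke off; try again.
            if attempt == max_attempts:
                return None  # give up after the specified number of attempts
    return None

# Simulate a connection that fails twice and then succeeds:
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("link broken off")
    return "page contents"

print(download_with_retries(flaky_fetch, max_attempts=5))  # page contents
print(calls["n"])  # 3
```

If the specified number of attempts is lower than the number of failures, the file is simply skipped and the program moves on.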
For example, when you download
a page using Internet Explorer 5.0, the remote server performs these operations
and records them in a log. The Extractor program
does the same thing when you visit a website.
We would like to draw your attention to the following:
Since the worldwide web contains a huge number of pages, considerable
processing power may be needed, as well as a large amount of disk space
on your computer, to download links and websites. A few hours of running
the program may take up many gigabytes on your hard disk.
You can use these menu options to limit the size of the files to be downloaded. If you have selected "Load all file sizes", files of all sizes will be downloaded. Otherwise, only files within the size range (specified in bytes) you have selected will be downloaded.
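The size limit amounts to a single range check before each download. This is a hypothetical sketch; the function and parameter names are assumptions for illustration.

```python
def size_allowed(size_bytes: int, load_all: bool,
                 min_bytes: int = 0, max_bytes: int = 10_000_000) -> bool:
    """Decide whether a file of the given size should be downloaded."""
    if load_all:
        return True  # "Load all file sizes" selected: no limit applies
    return min_bytes <= size_bytes <= max_bytes

print(size_allowed(5_000_000_000, load_all=True))               # True
print(size_allowed(50_000, load_all=False, max_bytes=100_000))  # True
print(size_allowed(500_000, load_all=False, max_bytes=100_000)) # False
```

Setting a modest upper bound is one way to keep a long crawl from filling your hard disk with large media files.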
The filter can be used in two separate ways: to include or to exclude.
If you have entered words into the exclude filter, any URL containing
one of these words will not be downloaded. If you opt for the include
filter, only URLs containing the words specified in the filter will be
downloaded.
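The include/exclude behaviour can be sketched as a substring test on each URL. The function and parameter names below are illustrative assumptions, not the program's actual code.

```python
def url_passes(url: str, words, mode: str) -> bool:
    """Apply a word filter to a URL in either "include" or "exclude" mode."""
    hit = any(word in url for word in words)
    if mode == "exclude":
        return not hit   # skip URLs containing any filtered word
    return hit           # "include": keep only URLs containing a word

print(url_passes("http://host/ads/banner.gif", ["ads"], "exclude"))    # False
print(url_passes("http://host/docs/manual.html", ["ads"], "exclude"))  # True
print(url_passes("http://host/docs/manual.html", ["docs"], "include")) # True
```

An exclude filter is useful for skipping advertising or tracking URLs, while an include filter narrows a crawl down to just the sections of a site you care about.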
This is all you have to do for the main program settings.
When you exit the menu window, the data you have entered is saved by default, and you can proceed to download websites.
Now we can start a project. The default properties you have entered will automatically be applied when you start a new project. These properties can be altered and saved separately for each project for later use.
The term "project" therefore refers to the complete set of options that defines which site is to be downloaded and with which properties.
Copyright: InternetSoft Corporation, 2000 - 2006
You can find updated versions on our website at: http://www.offlinedownloader.com