C# crawler (NCrawler) - Need a specific configuration and extension
$15-20 USD
Closed
Posted about 9 years ago
Paid on delivery
Hi,
We need to crawl our company's intranet website and extract all links, each with its link name and URL. We want to use NCrawler <[login to view URL]> (LGPL license). We will process only HTML, and everything is in the UTF-8 charset.
Most of the feature requests below are already implemented by libraries included in NCrawler's latest source files.
**The scope of the current project**:
1. a. Write a Windows dialog on top of NCrawler's console, from which indexing the links of a given URL can be started, stopped and resumed.
b. Stopping should also occur if, for example, the internet connection breaks down or the program is closed.
c. The retry count for failed URLs, as well as the link depth, should be configurable.
2. The found URLs and their link names are saved into a SQL Express database, and the currently processed URL is logged to either the console or a text box (programmer's choice).
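The storage requirement in point 2 can be sketched as a custom NCrawler pipeline step. This is a rough sketch, not a drop-in solution: the `IPipelineStep` interface and the `PropertyBag` members are taken from NCrawler's public API (verify against the version you use), and the connection string, table name and column names are our own assumptions.

```csharp
using System;
using System.Data.SqlClient;
using NCrawler;
using NCrawler.Interfaces;

// Assumed schema: CREATE TABLE CrawledLinks (Url NVARCHAR(2048), LinkName NVARCHAR(512))
// NVARCHAR columns plus parameterized commands keep the UTF-8 source text intact.
public class SqlStorePipelineStep : IPipelineStep
{
    private const string ConnectionString =
        @"Server=.\SQLEXPRESS;Database=CrawlerDb;Integrated Security=true";

    public void Process(Crawler crawler, PropertyBag propertyBag)
    {
        using (var con = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO CrawledLinks (Url, LinkName) VALUES (@url, @name)", con))
        {
            cmd.Parameters.AddWithValue("@url", propertyBag.Step.Uri.ToString());
            cmd.Parameters.AddWithValue("@name", propertyBag.Title ?? string.Empty);
            con.Open();
            cmd.ExecuteNonQuery();
        }

        // Log the currently processed URL (console variant of the requirement).
        Console.WriteLine("Processed: " + propertyBag.Step.Uri);
    }
}

// Possible wiring, with the link depth of 3 from the deliverables:
// using (var c = new Crawler(new Uri("http://intranet.example/"),
//         new HtmlDocumentProcessor(), new SqlStorePipelineStep())
//     { MaximumCrawlDepth = 3 })
// {
//     c.Crawl();
// }
```

The retry count for failed downloads is also configurable on the `Crawler` object, though the exact property name varies between NCrawler versions.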
**Target system**:
Our system has .NET 4 and Microsoft SQLExpress.
**Deliverables**: We need a working sample with clean code, including all source files in C#, that is able to index <[login to view URL]> with a link depth of 3 and that can resume when we disconnect the internet connection and reconnect. All data should be stored in MS SQL Express. (Watch out for UTF-8.)
----------------------------
**Information for the programmer to make your work easier:**
For stopping / resuming: Have a look at [login to view URL](false or true);
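The elided call above looks like the setup of one of NCrawler's persistent storage modules, which accept a boolean resume flag. A heavily hedged sketch of that pattern follows; the module name, constructor signature and storage path are assumptions based on NCrawler's bundled storage modules and should be checked against the actual source:

```csharp
// Assumption: NCrawler ships storage modules (e.g. a file-based one) whose
// constructor takes a resume flag. Persisting the crawl queue/history this way
// lets an interrupted crawl continue where it stopped.
// false = start a fresh crawl; true = resume from the persisted state.
NCrawlerModule.Setup(new FileStorageModule(@"C:\CrawlerState", true));
```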
Regarding link-name extraction: [login to view URL] doc = new [login to view URL]();
[login to view URL]([login to view URL]);
You can use RegEx.
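The elided snippet above follows the shape of an HTML-parser API; a sketch assuming HtmlAgilityPack (a common choice for this in C#, though the original post does not name the library) that extracts each anchor's URL and link name:

```csharp
using System;
using System.Collections.Generic;
using HtmlAgilityPack; // assumed library; the post elides the actual class names

public static class LinkExtractor
{
    // Returns (link name, URL) pairs for every <a href="..."> in the HTML.
    public static IEnumerable<KeyValuePair<string, string>> ExtractLinks(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // SelectNodes returns null (not an empty list) when nothing matches.
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors == null) yield break;

        foreach (var a in anchors)
        {
            string url = a.GetAttributeValue("href", string.Empty);
            string name = HtmlEntity.DeEntitize(a.InnerText).Trim(); // anchor text = link name
            yield return new KeyValuePair<string, string>(name, url);
        }
    }
}
```

A regular expression such as `<a[^>]+href="([^"]*)"[^>]*>(.*?)</a>` could serve as a fallback, as the post suggests, but an HTML parser copes better with malformed markup and nested tags inside the anchor.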
Have a good day and all the best,
Sina