Looking to have an application that will do the following:
1. Extract data from tables on websites covering approx. 1,600 locations
2. Export the extracted data to CSV or Excel files
3. Compare these files with each other
4. Create a master Excel file from all the unique files
5. Run user-defined find-replace strings on this master file
6. Split the master file back into as many unique Excel files
7. Automatically enter the duplicate files' zip codes / store IDs
## Deliverables
Finer details:
Hello,
We are looking to have an application that will do the following:
1. Extract data from tables on 10-12 stores' websites covering approx. 1,600 locations (based on zip codes / store location / store ID). The tables contain product data such as description, price, promotional offer, etc.
2. Export the extracted data to CSV or Excel files (one file per zip code / store location / store ID).
3. Compare these files with each other (within one store) and identify unique / duplicate files (based on cell contents).
4. Create a master Excel file from all the unique files (within one store).
5. Run user-defined find-replace strings on this master file, with prefixed and suffixed wildcards, to remove unneeded information from the strings.
6. Split the master file back into as many unique Excel files (based on zip codes / store IDs).
7. Automatically enter the duplicate files' zip codes / store IDs into the unique files just split in step 6 above.
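The duplicate-detection part of steps 3 and 7 could be sketched roughly as below. This is a minimal illustration, not a full implementation: it assumes each per-location export is a CSV file named after its zip code / store ID, and treats two files as duplicates when their byte contents are identical (for Excel files the cell values would need to be read out first, e.g. with a library such as Apache POI). The class and method names are made up for the example.

```java
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.*;

// Hypothetical sketch: group per-location CSV exports by identical content,
// so each group yields one "unique" file plus the list of duplicate IDs.
public class DuplicateGrouper {

    // Hash the raw bytes of a file; identical cell contents give an identical hash.
    static String contentHash(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Map each distinct content hash to the zip codes / store IDs
    // (taken from the file names) that share that content.
    static Map<String, List<String>> groupByContent(List<Path> files) throws Exception {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (Path f : files) {
            String id = f.getFileName().toString().replaceFirst("\\.(csv|xlsx?)$", "");
            groups.computeIfAbsent(contentHash(f), k -> new ArrayList<>()).add(id);
        }
        return groups;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("locations");
        Files.write(dir.resolve("10001.csv"), "sku,price\nA1,9.99\n".getBytes());
        Files.write(dir.resolve("10002.csv"), "sku,price\nA1,9.99\n".getBytes());
        Files.write(dir.resolve("90210.csv"), "sku,price\nA1,8.49\n".getBytes());
        // Prints two groups: 10001 and 10002 share content; 90210 is unique.
        System.out.println(groupByContent(Arrays.asList(
            dir.resolve("10001.csv"), dir.resolve("10002.csv"), dir.resolve("90210.csv"))));
    }
}
```

With the groups in hand, step 4 would merge one representative file per group into the master, and step 7 would write each group's full ID list back onto the file split out for that group.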
This may look complicated at first, but it is not so in reality. We are able to accomplish this successfully with the help of various tools, but are now looking to integrate all of these steps into one single application.
We would prefer .NET, Java or Perl for such an application. We use Windows XP and MS Office on our systems.
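Step 5's find-replace with prefixed and suffixed wildcards could be handled by translating the user's pattern into a regular expression, roughly as sketched below. This is only an assumption about the intended wildcard syntax: `*` is taken to match any run of characters (greedily, since the goal is to strip unneeded information), and everything else is treated literally. The class name is made up for the example.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of step 5: apply a user-defined find string with
// '*' wildcards to a cell value, replacing whatever the pattern matches.
public class WildcardReplacer {

    // Convert a pattern like " (was $*)" into a regex: '*' becomes a greedy
    // ".*", every literal segment is quoted so regex metacharacters are safe.
    static Pattern toRegex(String wildcard) {
        String[] parts = wildcard.split("\\*", -1);
        StringBuilder rx = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) rx.append(".*");
            rx.append(Pattern.quote(parts[i]));
        }
        return Pattern.compile(rx.toString());
    }

    static String findReplace(String cell, String wildcard, String replacement) {
        return toRegex(wildcard).matcher(cell).replaceAll(replacement);
    }

    public static void main(String[] args) {
        // Strip a promotional suffix such as " (was $9.99)" from a description.
        System.out.println(findReplace("Widget 2-pack (was $9.99)", " (was $*)", ""));
        // prints "Widget 2-pack"
    }
}
```

In the full application this would be run over every cell of the master file before the split in step 6; a production version would also escape `$` and `\` in the replacement text (e.g. via `Matcher.quoteReplacement`).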