We need this done as soon as possible.
aspx HTTP Proxy
We need you to write an aspx script that works as an HTTP proxy for any site. We are not specialists in this technology, so we can't advise you on how to approach the project, but we know exactly what we need and have implementations in other languages. Please read this document carefully, and feel free to ask any questions.
I’ll start by showing you an example of how it should work.
1) We want to point [[login to view URL]][1] to [[login to view URL]][2].
2) Then, we upload the script you will develop to [[login to view URL]][1]. From then on, every HTTP request sent to [[login to view URL]][1] is forwarded to [[login to view URL]][2], including folders and files. So, if you query [[login to view URL]][3], the server should query <[login to view URL]> and send the response back to the user. The same should work with [[login to view URL]][4] and [[login to view URL]][5].
Continued in word doc...
## Deliverables
Please note that all the links to [[login to view URL]][2] (css, js, jpg files) should be replaced with [[login to view URL]][1], so that even if you look at the source code of [[login to view URL]][1], you can't tell that it's pulling its contents from [[login to view URL]][2].
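As a rough sketch of how that rewriting could be done in C# (the method name and host parameters below are our own illustration, not part of the spec; a real implementation may need to handle relative URLs and escaped forms as well):

```csharp
// Illustrative sketch: rewrite absolute links in an HTML/CSS/JS response
// body so the browser keeps talking to the proxy host instead of the
// destination host. Parameter names are assumptions.
string RewriteLinks(string body, string destinationHost, string proxyHost)
{
    // Cover the common absolute URL forms.
    body = body.Replace("http://" + destinationHost, "http://" + proxyHost);
    body = body.Replace("https://" + destinationHost, "http://" + proxyHost);
    body = body.Replace("//" + destinationHost, "//" + proxyHost);
    return body;
}
```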
It’s extremely important to:
1. Be able to configure the site we’re pointing to.
2. Keep the user's session. Each user should feel like they are interacting directly with [[login to view URL]][2], able to log in, run queries, and use any service that Yahoo may provide, now or in the future. It must work with javascript, flash, css, jpg, gif, php, jsp, html, htm, aspx, png, xml… and any other widely used extension you can think of, sending back the appropriate MIME types.
3. Populate POSTs and GETs.
We have addressed these issues by:
1. Using a config file where we can set the destination site. Simply changing this to [[login to view URL]][6] should make the script work with Amazon.
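In ASP.NET this could be as simple as an appSettings entry in web.config (the key name `DestinationHost` and the value shown are assumptions for illustration):

```xml
<!-- Illustrative web.config fragment: the destination site is a single
     setting, so pointing the proxy at another host is a one-line change. -->
<configuration>
  <appSettings>
    <add key="DestinationHost" value="www.example.com" />
  </appSettings>
</configuration>
```

The script would then read it with `ConfigurationManager.AppSettings["DestinationHost"]` instead of hard-coding the host.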
2. Placing a cookie in the user's browser and keeping one file per user on the server. When the user makes a request, we check whether a cookie file exists for that user. If so, we use the stored cookie file to send requests to Yahoo, also storing any cookies Yahoo sends back in that file. This way, each user is isolated.
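A minimal sketch of that per-user cookie scheme, assuming a cookie named `ProxySession` and one cookie file per user under App_Data (both names are our own illustration, not a requirement):

```csharp
using System;
using System.Web;

// Illustrative sketch of the per-user cookie store described above.
// Identify the user by a proxy-issued cookie, creating one on first visit.
string GetUserId(HttpRequest request, HttpResponse response)
{
    HttpCookie cookie = request.Cookies["ProxySession"];
    if (cookie == null)
    {
        cookie = new HttpCookie("ProxySession", Guid.NewGuid().ToString("N"));
        response.Cookies.Add(cookie);
    }
    return cookie.Value;
}

// One cookie file per user; it would be loaded before each upstream
// request and saved afterwards with any Set-Cookie headers that came back.
string CookieFilePath(string userId)
{
    return HttpContext.Current.Server.MapPath("~/App_Data/" + userId + ".cookies");
}
```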
3. Simply forwarding the GET/POST information we receive. This should work with file uploads too.
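The forwarding step might look roughly like the following, assuming plain `HttpWebRequest` is used (header and cookie handling are simplified here; this is a sketch, not the required implementation):

```csharp
using System.IO;
using System.Net;
using System.Web;

// Illustrative sketch: forward the incoming request (method, body) to the
// destination host and stream the response back, preserving the upstream
// Content-Type so MIME types stay correct for css, js, images, etc.
void Forward(HttpContext context, string destinationHost)
{
    HttpRequest incoming = context.Request;
    string targetUrl = "http://" + destinationHost + incoming.RawUrl;

    HttpWebRequest upstream = (HttpWebRequest)WebRequest.Create(targetUrl);
    upstream.Method = incoming.HttpMethod;
    upstream.UserAgent = incoming.UserAgent;

    if (incoming.HttpMethod == "POST")
    {
        // Copying the Content-Type and raw body covers ordinary form
        // posts and multipart file uploads alike.
        upstream.ContentType = incoming.ContentType;
        using (Stream body = upstream.GetRequestStream())
        {
            incoming.InputStream.CopyTo(body);
        }
    }

    using (HttpWebResponse response = (HttpWebResponse)upstream.GetResponse())
    using (Stream responseBody = response.GetResponseStream())
    {
        context.Response.ContentType = response.ContentType; // pass MIME type through
        responseBody.CopyTo(context.Response.OutputStream);
    }
}
```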
We will allow you to use LGPL software, but you must inform us about what the library/module does and provide a link to its website.