I have a C# .NET WebForm that does a simple Response.Write with a JSON content type. It works great from every client I've tested it with, including a TinyWebDB API call from an Android phone, but don't worry about that API for this question.
I added some server-side code to that WebForm to read and scrape a web page, as follows:
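For context, the WebForm's response is produced roughly like this (a minimal sketch; the JSON body here is a placeholder, not my real payload):

```csharp
// Hypothetical sketch of the WebForm's Page_Load, not the actual handler
protected void Page_Load(object sender, EventArgs e)
{
    Response.ContentType = "application/json"; // JSON content type
    Response.Write("{\"status\":\"ok\"}");     // placeholder body
    Response.End();
}
```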
// Download the page server-side and read the whole response body
using (System.Net.WebClient myWebClient = new System.Net.WebClient())
using (Stream myStream = myWebClient.OpenRead(what))
using (StreamReader sr = new StreamReader(myStream))
{
    string s = sr.ReadToEnd();
}
I test a call to my WebForm from IE, FF and Chrome, and all work great. However, if I call the WebForm page from TinyWebDB, the call itself works and I get data back, but I get a 404 error on the server-side read of the web page.
It's almost as if System.Net.WebClient requires something from, or is doing something on, the client itself. I thought the reading of the page was all happening server-side, behind the scenes on my server. Why would my server-side
code care about what browser or API initiated the call to the WebForm?
Should I be using another class?
WebClient.OpenRead uses a GET method to make the request. What's the URI you called?
You could also try the HttpWebRequest class, a more general class for making an HTTP GET request:
string path = "http://www.*.com/*.aspx";
try
{
    System.Net.HttpWebRequest webrequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(path);
    webrequest.Method = "GET";
    webrequest.AllowAutoRedirect = true;
    webrequest.Credentials = new NetworkCredential("username", "password"); // If the URI requires authentication, you can supply a NetworkCredential
    using (System.Net.HttpWebResponse myresponse = (System.Net.HttpWebResponse)webrequest.GetResponse())
    {
        if (myresponse.StatusCode == System.Net.HttpStatusCode.OK)
        {
            using (Stream resStream = myresponse.GetResponseStream())
            using (StreamReader reader = new StreamReader(resStream))
            {
                string readstring = reader.ReadToEnd();
            }
        }
    }
}
catch (WebException ex)
{
    if (ex.Response is HttpWebResponse)
    {
        Response.Write("page not found");
    }
}
Hope this helps.