
remote web page calling local server


Somehow, certain (remote) web sites have pages that are calling up LOCAL web server pages (which don't exist).

Scenario: I have installed and configured an httpd web server for the cacti monitoring tool. It serves no other function. Each night, I run logwatch. I am seeing 404 errors in the logwatch reports for my local web server, such as the following: /AdServer/Pug?vcode=bz0yJnR5cGU9MSZjb2RlPT ... nasq0Fx5P7R7hyX: 1 Time(s)

I saw 70 or 80 of these in last night's report, but other reports sometimes show a lot more (hundreds), other times only a few or none. It probably depends on how much I use the browser.

I am not running an adserver, and to the best of my knowledge, cacti does not access such a thing (or does it? I doubt it). So some page I downloaded from the WWW probably has some JS code that attempts to fetch pages such as the one I gave as an example.

Btw, just for comparison, and maybe useful in pinpointing the source of this problem: QupZilla does this as well on a different system. For instance, an error in that system's logwatch report showed an attempt to fetch a page from a web property owned by Hearst (I had been browsing a lot of news sites the previous day, so this makes sense). Apparently, that page was miscoded and tried to fetch the resource from MY local web server, not the appropriate web server at Hearst or its partners.

This may not be a huge problem in the cases I have actually encountered. But imagine a malicious page accessing a local webmin page, or a MySQL web front end used by local database admins, and you might see why I am concerned.

I am not sure which specific pages I was accessing that triggered these errors; that would take some poking around to figure out. If you have not encountered this issue previously, I can do so and provide that info.

Thank you for your support.



All Replies (7)


When you say local server, do you mean on your own system? If you run it on port 80 or port 443, you could consider moving it to a different port to see whether that makes any difference.

Some users map advertising servers to the loopback address in their hosts file to "block" the requests. You would probably remember if you did that, but omnibus security/privacy programs might do it without full disclosure. (I suspect that's a lot more likely on Windows.)
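
For illustration, hosts file entries of that kind look something like this (hypothetical hostnames):

    # map ad/tracker hosts to the local machine so requests never leave it
    127.0.0.1   ads.example.com
    127.0.0.1   tracker.example.net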

Sometimes I see a site where the developer accidentally left localhost in the code rather than updating all of the URLs for deployment. Perhaps that could cause this issue, although I would hope Firefox would discard the request as inappropriate...
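
For example, a leftover development reference might look like this (hypothetical path):

    <!-- hypothetical: a development URL that was never updated for production -->
    <script src="http://localhost/assets/app.js"></script>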

Do your logs record any referring page information?


I, too, would hope Firefox -- or any browser, for that matter -- would angrily flag such attempts, or perhaps log them to the syslog, or at least toss them.

I'm sure your test would tell us exactly what we are thinking. But no, I did not purposely re-point the local server to the loopback. The localhost entry is currently pointing to 127.0.0.1, and the hostname is pointing to 127.0.1.1; these were set when I installed the system. I do not see any referrer info, btw. Maybe there is a way to tell apache to log that info for further reference.
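
(If I am reading the Apache docs right, the stock "combined" log format already records the Referer header; something like the following in the Apache configuration should do it, with the log path adjusted for the distro:)

    # log the Referer and User-Agent headers along with each request
    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    CustomLog /var/log/httpd/access_log combined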

Whether or not a developer remembers to prepare their code for deployment, and whether or not a user assigns the loopback (or another) address to misdirect page requests, malicious or otherwise, every browser should guard against this! From what you are saying, and I agree these steps should be taken to mitigate the problem (and are with most distros, I think), it would be very easy to overlook.

My thinking is this: if a web page requires additional web resources, such as a javascript file, or photos, or whathaveyou, it should query its own originating server and get them there (if the resource is on a third-party server somewhere, the originating server would handle it). But just in case it doesn't, browsers should be prepared with a wrist-slap response to such requests. There is potential here for malware to do some serious damage and/or steal valuable or private information.
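
(There is at least one existing mechanism along these lines, as I understand it: a site can send a Content-Security-Policy header, and the browser will then refuse to load scripts from any origin not on the list. A hypothetical example that would permit only the page's own origin and Google's tag service:)

    Content-Security-Policy: script-src 'self' https://www.googletagservices.com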

Is there any reason browsers could not simply dispose of these potentially risky access attempts? And thanks for your response.


So here is an example. I changed the logging format to include the referrer and restarted apache. Then, in Firefox, I went to

http://www.nytimes.com/2010/02/04/us/politics/04prayer.html

The access_log shows an access from the page above; the request in this case was for "/tag/js/gpt.js". I viewed the source and searched for this string and found:

    function loadGPT() {
        if (!window.advBidxc.isAdServerLoaded) {
            loadScript('//www.googletagservices.com/tag/js/gpt.js');
            window.advBidxc.isAdServerLoaded = true;
        }
    }

Here is the loadScript routine (just above loadGPT, in fact):

    function loadScript(tagSrc) {
        if (tagSrc.substr(0, 4) !== 'http') {
            var isSSL = 'https:' == document.location.protocol;
            tagSrc = (isSSL ? 'https:' : '') + tagSrc;
        }
        var scriptTag = document.createElement('script'),
            placeTag = document.getElementsByTagName("script")[0];
        scriptTag.type = 'text/javascript';
        scriptTag.async = true;
        scriptTag.src = tagSrc;
        placeTag.parentNode.insertBefore(scriptTag, placeTag);
    }

The assignment to tagSrc is the smoking gun. Only if this is an https request will it prepend "https:" to the passed-in argument; otherwise, nothing is prepended (the empty string). In this case, the page is not SSL, so nothing is prepended, and the resulting string is just as it was passed in. But those leading double slashes will be interpreted as, what, a local file request? Anyway, the request is going to my local server, not to Google to fetch some ad, nor back to the NY Times.

So, a web page fetched from nytimes.com ends up querying my own httpd server. This is the bad coding I suspected; no surprises there. Again, I think Firefox should handle this with the potential for a security breach in mind.


This may be a better example, because it has no intervening javascript (the request itself happens to be for some javascript, but that's irrelevant), so the browser handles the fetch itself.

The page is http://www.sacbee.com/news/politics-government/capitol-alert/article195025409.html

The logfile shows that while loading the page, it is trying to fetch a remote file (looks like a request for ads). Tracking it down in the source, I see:

<script type="text/javascript" src="//ad.crwdcntrl.net/5/c=7436/pe=y/callback=extractPid"></script>

I would have to examine the mozilla source code to find out how this is handled, but it looks like the same result as the prior example, and in this instance the browser itself sets up the request rather than some code in the page.

I'll try to find more examples by going through the history of the pages I hit the previous day.


Now, upon further examination, I am noticing yet another problem. When I ping a domain like "googletagservices.com", I get "unknown host" as the response. But when I ping "www.googletagservices.com" or any other subdomain of it, I get a response from the local system. That is, it resolves to the loopback, 127.0.0.1.

So this looks like some kind of DNS configuration problem.
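
(One way I can narrow this down is to compare what the system resolver returns with what a known external resolver returns, e.g. with dig:)

    # ask the default resolver first, then ask Google's public DNS directly
    dig +short www.googletagservices.com
    dig +short www.googletagservices.com @8.8.8.8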


Chosen Solution

Apparently, my dnsmasq server was misconfigured. It will take me some time to figure out exactly what is wrong, but if I disable it, I get correct ping behavior.
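
(For anyone who hits the same thing: dnsmasq has an address directive that maps a domain and all of its subdomains to a fixed IP, so an ad-blocking rule like the following, perhaps pulled in from a downloaded blocklist, would produce this kind of loopback resolution. The domain here is just the one from my logs; the actual culprit line may differ:)

    # hypothetical /etc/dnsmasq.conf rule: answer 127.0.0.1 for
    # googletagservices.com and all of its subdomains
    address=/googletagservices.com/127.0.0.1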

So this is probably not a problem with your or any browser. Forgive me and close this. Thank you.


I'm glad you got to the root of the problem.

For future reference, when Firefox encounters an address without a protocol --

//hostname/path/file

-- it uses the protocol of the current page. This style of path is intended to minimize problems with "mixed content".
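
(A quick way to see this in action, e.g. from the browser console, using the same placeholder path:)

    // the '//' form inherits the scheme of the page it appears on
    new URL('//hostname/path/file', 'http://example.com/page').href;  // "http://hostname/path/file"
    new URL('//hostname/path/file', 'https://example.com/page').href; // "https://hostname/path/file"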