
Servers That Lie

Mendacious machines, hacker-controlled servers that reroute Internet traffic from infected computers to fraudulent Web sites, are increasingly being used to launch attacks, according to a paper published this week by researchers at the Georgia Institute of Technology and Google Inc.

“The paper estimates roughly 68,000 servers on the Internet are returning malicious Domain Name System results, which means people with compromised computers are sometimes being directed to the wrong Web sites – and often have no idea.”

And often have no idea. That is what bothers me the most about users on the Internet. Most of the harm done to users comes from their own lack of knowledge and from simply not paying attention. There are ways to protect yourself, your systems, and your servers; it just takes the time to learn, pay attention, and do it.
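
As one concrete example of paying attention, here is a minimal sketch (my own illustration, not something from the paper) that asks your machine’s configured resolver and an independent resolver you trust for the same names and compares the answers. It assumes the third-party dnspython package (2.x API); the OpenDNS address and the domain list are placeholders, and a mismatch is only a hint, since CDNs legitimately hand out different addresses to different resolvers.

    # Compare what the OS-configured resolver says about a few well-known
    # names with what an independent resolver you trust says.
    # Requires the third-party dnspython package (dns.resolver, 2.x API).

    import sys
    import dns.exception
    import dns.resolver

    TRUSTED_RESOLVER = "208.67.222.222"   # example only (OpenDNS); use any resolver you trust
    NAMES_TO_CHECK = ["www.google.com", "www.paypal.com"]  # illustrative domains


    def lookup(resolver, name):
        """Return the sorted A records for `name`, or [] if the lookup fails."""
        try:
            return sorted(rr.address for rr in resolver.resolve(name, "A"))
        except dns.exception.DNSException:
            return []


    def main():
        system_resolver = dns.resolver.Resolver()             # whatever the OS is configured to use
        trusted_resolver = dns.resolver.Resolver(configure=False)
        trusted_resolver.nameservers = [TRUSTED_RESOLVER]

        suspicious = False
        for name in NAMES_TO_CHECK:
            local = lookup(system_resolver, name)
            remote = lookup(trusted_resolver, name)
            if local != remote:
                # Not proof of compromise (CDNs vary answers by resolver), but
                # repeated mismatches on well-known names deserve a closer look.
                print(f"MISMATCH {name}: system={local} trusted={remote}")
                suspicious = True
            else:
                print(f"ok       {name}: {local}")
        sys.exit(1 if suspicious else 0)


    if __name__ == "__main__":
        main()

If well-known names keep resolving differently from what a resolver you trust reports, that is the moment to check which DNS servers your machine is actually configured to use.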

Read more about “servers that lie” here.

2 Responses

  1. While I agree that more internet users should be aware of what they are doing, where they are going, and what kind of information they are giving out, in this case it looks like the majority of users would be powerless. Once their computers are compromised in some form to request DNS information from an untrustworthy server, the end user would not be able to tell the difference, because the computer actually thinks the malicious server it’s talking to is the legitimate server, and there are no mechanisms to protect the user (unless the user starts memorizing IP addresses). It’s just the crafty nature of the attack.
    As the technology gets smarter at alerting users when they are going to an illegitimate site, the hackers have to think of craftier ways of getting around it.

    However, the point of my comment is that we as computer people can’t always be so quick to call PEBKAC and blame the user. In this case, while they may be able to do more to protect themselves from being compromised in the first place, once they are, there is no way for them to tell if they are going to a legit site or not.

  2. The big issue is plug-ins.

    One of the biggest examples is a plug-in that was written to allow IRI names in Firefox and Internet Explorer, before either supported IRIs (Unicode URIs).

    It took over all URL processing and, to a lesser extent, the http/https schemes (it could have taken over those schemes entirely if it had wanted to).

    So you go “so what?”

    Well, the folks writing it weren’t content with providing IRI support. They created a bunch of fake root domains and added support for those too. Why should you have to go through IANA and get a coordinated .com name when you could get a fake address off of one of their fake root names?

    The users downloading the plug-in for IRIs may not have known (and probably did not know) about this second use.

    Part of the problem is that barring TLS/SSL, users have only the URL to look at, and there’s no indication of:
    * who issued the URL (what registrar)
    * who the URL is issued to

    That information is typically not directly available as part of the browser UI.
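
On that last point in the second response: for a site served over TLS/SSL, the certificate does carry the “issued to” and “issued by” details, even when the browser UI does not put them front and center. Below is a small sketch of my own, using only Python’s standard ssl and socket modules, that pulls that information out of a site’s certificate; the host name is just a placeholder.

    # Pull the "issued to / issued by" information out of a site's TLS
    # certificate, since the URL alone doesn't show it. Standard library only.

    import socket
    import ssl

    HOST = "www.example.com"   # placeholder: any HTTPS site you want to inspect
    PORT = 443


    def describe_certificate(host, port=PORT):
        context = ssl.create_default_context()       # verifies the chain and host name
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()              # parsed dict of the validated cert

        # getpeercert() returns subject/issuer as nested tuples of (key, value) pairs.
        subject = {k: v for rdn in cert["subject"] for (k, v) in rdn}
        issuer = {k: v for rdn in cert["issuer"] for (k, v) in rdn}

        print(f"certificate for {host}:")
        print(f"  issued to:   {subject.get('commonName')} ({subject.get('organizationName')})")
        print(f"  issued by:   {issuer.get('commonName')} ({issuer.get('organizationName')})")
        print(f"  valid until: {cert.get('notAfter')}")


    if __name__ == "__main__":
        describe_certificate(HOST)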
