While phishing is a problem, giving one company the power to block any
site it wishes at the browser level never seemed like a good idea.
Actually, giving a single company this kind of authority is usually
not a bad idea; look at Spamhaus and email, for example.
The issue is about trust. Even with this goof-up, I trust Google
(although their response to this could change that). Hell, I trust MS
here too, to a limited extent.
Yeah. While I reflexively rankle at the idea of blocking a whole
swathe of domains like that, it’s unfortunately clear that services
like dyndns and mine.nu are going to be overrun with phishers and
scammers because they’re just as convenient to them as they are to
non-malicious Internet users.
We need to educate users to check the URL before entering anything.
Any time you rely on a technological solution to a social problem you
end up with woes.
It’s just not going to happen. We like to think that “everyone” is
capable of understanding what is going on when they browse the web,
but that’s wishful thinking.
It will be a LONG time until you can hope that the general public is
as smart as the malicious few out there. Until then, technological
solutions will continue to be needed, desired, and our best bet in
combating this. Hell, they always will be.
I don’t know anything about the FWT site; it may be fine. However, do
remember that just because a site is trustworthy over time doesn’t
mean it is trustworthy today, on this visit. I just had that driven
home for me the other day. In my off time, I am a youth soccer coach.
The website for our league has been fine for several years. Last week
I visited it and got the malware warning from Firefox. I checked with
the webmaster, and sure enough, they had been hit with a SQL injection
attack and did indeed have malware of some sort hosted on the site. So
FWT may be a false positive, but it is at least possible that they
were also successfully attacked. We really don't have a good system
for evaluating trust on the fly, given the dynamic nature of internet
content. A page that was fine 20 minutes ago may attack you now.
Granted, I can see there are opportunities for abuse here, but if the
owners of dynamic DNS domains don't properly police their "customers"
and spammers and/or other malicious websites start using them, then
Google has every right to blacklist the entire domain. Of course, it's
arguable exactly how much can be done to prevent it, but if you’re
really concerned about not getting your site blocked, go ahead and
blow the $7 a year on your own domain, or use a smaller ddns service
that can actually pay attention to the nature of the hosts it’s
serving.
As far as having any one third party responsible for maintaining a
blacklist, exactly how else do you intend to do it? You can always
create your own blacklist, but that would first require you to “enjoy”
the sites you would prefer get blocked automatically. You’ll just have
to trust someone to make that reasonable decision for you. Sure, there
will be some mistakes, but that’s the price you pay for protection.
Granted, I can see there are opportunities for abuse here, but if the
owners of dynamic DNS domains don't properly police their "customers"
and spammers and/or other malicious websites start using them, then
Google has every right to blacklist the entire domain.
Countries have been banned from sites, email, IRC channels, and so on
with this argument. Just so you know, some ISPs have de facto
monopolies in their countries, and everyone there gets the same
domain. Any idiot who says "let's ban *.il or *.es because I got 10
spam messages from there" should be fired on the spot. In fact, if he
works at Google, whoever hired him should be fired too.
I don't get why you're annoyed that I (and probably many others) do
things like this.
To my mind, giving Google this power is the most objectionable thing
about the company. I know somebody who has had his legitimate
business ruined because Google mistakenly added his site to this list.
Why? Because it was hosted on the same physical server as a truly
objectionable web site.
People need to stop childishly sneering at Windows users and take
their focus away from Microsoft. The terrible Goliath is clearly
Google now. Even when it’s not being evil it causes trouble just by
being *clumsy*.
The terrible Goliath is clearly Google now. Even when it’s not being
evil it causes trouble just by being *clumsy*.
No, Google doesn’t filter by IP address. But because the site was
hosted on the same server as a bad site it added a URL block for the
innocent too. Do you see?
Secondly, the issue isn’t about me using Firefox/Google. It’s about
customers who did and were told that the site they had browsed to was
malicious. The business lost a valuable customer this way and folded.
No, Google doesn’t filter by IP address. But because the site was
hosted on the same server as a bad site it added a URL block for the
innocent too. Do you see?
Doesn’t sound like a very professional business if it was using the
same domain that the bad site was on. Considering one can get a .com
for $6 a year, there really is no excuse.
It’s about customers who did and were told that the site they had
browsed to was malicious. The business lost a valuable customer this
way and folded.
This company obviously wasn't doing very well to begin with, or doing
things properly either; this is not surprising.
You are not going to convince me that they couldn't have done anything
to change the outcome, even once they became aware of the situation.
What I do find interesting is the fact that you claim Google did this,
when the anti-phishing filter in the most popular browser, IE, is run
by Microsoft. The most popular search engine is Yahoo!, which does not
use any phishing data from Google.
I would assume the original AC is lying, because Google's practices
for filtering bad sites were disclosed long ago on [stopbadware.org].
This is the first time we’ve heard about Google (or any others) making
a bad block. As long as Google fixes this expeditiously, I'd say it's
an acceptable margin of error; the number of phishing sites blocked is
by far worth it. Now, if wikileaks suddenly gets blocked
for ‘phishing’, something is definitely awry.
Any maintained blacklist of any reasonable size is going to end up
with false positives. It’s one of those things you just have to
accept. People notice and report it, the entry gets removed, and we
move on.
Putting anti-phishing filters into browsers just shifts the
responsibility of good security practices from the user to some
blacklisting company. What incentive is there to be wary of
suspicious sites if you can count on the almighty Google to hold your
hand while you browse the Web? This makes about as much sense as
someone installing parental controls in their machine and declaring
that their Internet connection is now “kid-friendly.”
I’ve never had these filters turned on, and I’ve never exposed my
financial data to others by accident. Usually this has something to do
with me hovering the mouse over links and checking the URL in the
status bar.
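That hover-and-check habit is mechanical enough to sketch in code. Here's a toy Python version of the idea; the heuristics are entirely mine for illustration, not anything a real browser ships:

```python
# Toy check: flag links whose visible text looks like a domain that
# doesn't match where the link actually goes (the classic phish trick).
from urllib.parse import urlparse

def deceptive_link(visible_text, href):
    text = visible_text.strip().lower().rstrip("/")
    host = (urlparse(href).hostname or "").lower()
    # Only flag when the link text itself looks like a host name.
    looks_like_host = "." in text and " " not in text
    return looks_like_host and text != host and not host.endswith("." + text)

print(deceptive_link("ebay.com", "http://signin.ebay.com.evil.example/"))  # True
print(deceptive_link("ebay.com", "https://ebay.com/item/123"))             # False
```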
If you’re serious about blocking phishing sites, you have to accept
some collateral damage. Blocking by URL stopped working last year;
most attacks have unique URLs now. Many have unique subdomains. So you
have to block at the second-level domain to be effective.
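To make that concrete, here's a rough Python sketch of what domain-level matching looks like. The blacklist entries and the tiny suffix table are made up for illustration (real code would consult the full Public Suffix List), and this is not a claim about how Google's filter is actually implemented:

```python
# Illustrative sketch of domain-level blacklist matching, NOT Google's
# actual implementation. Blocking on the registrable domain catches
# attacks that rotate a unique subdomain per victim.
from urllib.parse import urlparse

# Hypothetical blacklist entries, for illustration only.
BLACKLIST = {"mine.nu", "evil-phish.example"}

# Tiny sample of multi-label public suffixes; the real Public Suffix
# List has thousands of entries.
MULTI_LABEL_SUFFIXES = {"co.uk", "com.au"}

def registered_domain(hostname):
    """Return the registrable ('second-level') domain of a hostname."""
    labels = hostname.lower().rstrip(".").split(".")
    n = 2  # default: keep the last two labels
    if len(labels) >= 3 and ".".join(labels[-2:]) in MULTI_LABEL_SUFFIXES:
        n = 3  # e.g. "victim.co.uk" keeps three labels
    return ".".join(labels[-n:])

def is_blocked(url):
    host = urlparse(url).hostname or ""
    return registered_domain(host) in BLACKLIST

# Every unique throwaway subdomain still matches the domain entry:
print(is_blocked("http://paypal-login.x7q2.mine.nu/verify"))  # True
print(is_blocked("http://www.example.com/"))                  # False
```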
We publish a list of exploited domains. Here's an example link:
[ebay.com]. Click on that URL. It says "ebay.com", right?
It looks like eBay, right? It’s not.
On the other hand, “tinyurl.com”, which used to be popular with
phishers, has been able to get off the blacklist by cracking down on
misuse of their service. It’s possible to do redirection competently.
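For what it's worth, here's a minimal sketch of what "redirecting competently" could mean for a shortener. The function names and the re-check-on-every-hit policy are my own assumptions, not a description of how tinyurl actually does it:

```python
# Hypothetical URL shortener that refuses blacklisted destinations,
# both at creation time and again on every redirect served.
from urllib.parse import urlparse

DOMAIN_BLACKLIST = {"mine.nu", "evil-phish.example"}  # made-up entries
_store = {}  # short code -> destination URL

def _host_blocked(url):
    host = (urlparse(url).hostname or "").lower()
    # Block the domain itself or any host under it.
    return any(host == d or host.endswith("." + d) for d in DOMAIN_BLACKLIST)

def create_short_url(code, destination):
    """Register a redirect, rejecting known-bad destinations outright."""
    if _host_blocked(destination):
        return False
    _store[code] = destination
    return True

def resolve(code):
    """Serve a redirect, re-checking the destination on every hit."""
    destination = _store.get(code)
    if destination is None or _host_blocked(destination):
        return None  # serve an error page instead of redirecting
    return destination

print(create_short_url("a1", "http://login.x9.mine.nu/paypal"))  # False
print(create_short_url("b2", "https://example.com/article"))     # True
print(resolve("b2"))  # https://example.com/article
```

The re-check at resolve time matters for the same reason as the soccer-league story above: a destination that was clean when the short URL was created can be compromised later.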
When we started our list last year, it had about 175 exploited
domains. After some serious nagging and an article in The Register,
we’re down to 46. And only 11 have been on the list for more than
three months; the others come and go as exploits are reported and
holes plugged. So this is a problem that can be solved.
I’m glad to see Google taking a hard line on this. It’s necessary that
sites that do redirection feel the pain when they accept redirects to
hostile sites. Google can apply much more pain than we can. Few sites
will want to be on Google’s blacklist for long.
This strikes me as the first time Firefox really pushed something out
by default that shouldn't be. Just for one
example, people who are on LTSP networks, say, 200 users, will ALL
download anti-phishing, anti-malware blacklists from Google, each in
their own home directory. There’s no way that I know of, anyway, to
share this data – SQLite seems to make it impossible. That’s the first
mistake in creating a compatible, light web browser.
The second mistake is enabling website blocking based on 3rd party
blacklists by default. This is basically Microsoft UI thinking – “You
*need* this because you don’t know any better.” Screw that. I mean,
make it a checkbox on setup – “Use Google-provided anti-malware
blacklists." Simple as that. I spent weeks trying to find out why,
after just a few Firefox instances were launched on an LTSP server,
none more would load – part of this was because every user logging in
was trying to download the anti-malware stuff from Google, saturating
the line, and preventing Firefox from loading for the first time.
I hope the Firefox devs will take all scenarios into account when
making changes. It seems lame that every user needs all of the stuff
in places.sqlite. And even if you argue with that, at the LEAST make
it cross-DB compatible, so you can put everyone’s in a nice big
central MySQL database.
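In the meantime, one workaround sketch (with heavy caveats): pre-fetch the blacklist database once and copy it into each new profile, so 200 users don't all hammer Google at first launch. The urlclassifier3.sqlite filename is the Firefox 3 one and varies between versions, so treat it as an assumption; copying rather than symlinking is deliberate, to avoid SQLite lock contention between concurrent users:

```python
# Hypothetical LTSP helper: seed new Firefox profiles with one
# pre-fetched copy of the Safe Browsing database instead of letting
# every user download it from Google on first launch.
import shutil
from pathlib import Path

SEED_DB = Path("/usr/local/share/firefox/urlclassifier3.sqlite")
DB_NAME = "urlclassifier3.sqlite"  # Firefox 3 name; varies by version

def seed_profile(profile_dir):
    """Copy the shared blacklist DB into a profile that lacks one."""
    target = profile_dir / DB_NAME
    if not target.exists() and SEED_DB.exists():
        shutil.copy2(SEED_DB, target)

def seed_all(home_root=Path("/home")):
    # Firefox profiles live under ~/.mozilla/firefox/<random>.default/
    for profile in home_root.glob("*/.mozilla/firefox/*.default"):
        seed_profile(profile)

if __name__ == "__main__":
    seed_all()
```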
The corollary of this is, of course, that you should still be wary of
single points of failure, even if you do not believe they will fail
you on purpose.
Shit happens. Yes, it sucks, but it happens. Now, should we try to
blow up the googleplex? No. Google are not blocking based on a secret
agenda here, and you can bypass it or turn off the feature. OK, it’d
be nice if you could choose who provides the service, but overall,
it’s not that big a deal.
Of the 4329 pages we tested on the site over the past 90 days, 0
page(s) resulted in malicious software being downloaded and installed
without user consent. The last time Google visited this site was on
09/21/2008, and suspicious content was never found on this site within
the past 90 days.
Malicious software includes 7523 scripting exploit(s), 2911 trojan(s).
Successful infection resulted in an average of 0 new processes on the
target machine.
Over the past 90 days, mine.nu/ appeared to function as an
intermediary for the infection of 183 site(s) including
culportal.info, mipt.ru, baikal-discovery.ru.
Yes, this site has hosted malicious software over the past 90 days. It
infected 932 domain(s), including bernard-becker.com, mipt.ru,
dhammasara.com.
In some cases, third parties can add malicious code to legitimate
sites, which would cause us to show the warning message.
* Return to the previous page.
* If you are the owner of this web site, you can request a review of
your site using Google Webmaster Tools. More information about the
review process is available in Google's Webmaster Help Center.
Presumably, if Google thinks some subdomains are malicious, they
actually know which ones are in fact malicious, since they found them
in the first place? I'm wondering if the reason they blocked the
entire domain is that some attackers are registering lots of
subdomains as a fast-flux method.
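If that guess is right, the escalation logic could be as simple as counting distinct malicious subdomains per parent domain. This is pure speculation about Google's side, with a made-up threshold and a naive last-two-labels domain extraction:

```python
# Speculative sketch: flag a whole domain once a phishing-report feed
# shows too many one-off subdomains under it.
from collections import defaultdict
from urllib.parse import urlparse

SUBDOMAIN_THRESHOLD = 25  # made-up cutoff

def parent_domain(hostname):
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])  # naive; real code needs the Public Suffix List

def domains_to_block(reported_urls):
    subdomains = defaultdict(set)
    for url in reported_urls:
        host = urlparse(url).hostname or ""
        if host:
            subdomains[parent_domain(host)].add(host)
    return {d for d, hosts in subdomains.items()
            if len(hosts) >= SUBDOMAIN_THRESHOLD}

# Thirty one-off phishing hosts under mine.nu flag the whole domain:
feed = [f"http://phish{i}.mine.nu/login" for i in range(30)]
print(domains_to_block(feed))  # {'mine.nu'}
```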
Um, no. The list is supplied by Google. When Firefox blocks a site,
press the ‘Why was this site blocked?’ button to see Google’s warning
about it ([google.com] in this case).