Online predators recently chose to prey on UT accounts, IT officials say.

University e-mail inboxes have suffered a recent influx of phishing e-mails, which bait users into replying with personal information, said Bob Hogle, director of information security.

“E-mail addresses are sheared from the black hat community,” Hogle said. “It’s the underground of spammers, spyware and all the attackers.”

These new phishing attempts from the “black hat” community were disguised as e-mails from UT’s support team, requesting usernames and passwords from students and faculty, Hogle said.

“People really need to understand that IT will never ask for any personal information,” he said.

The university’s spam filters catch phishing attempts every day, Hogle said. “UT receives millions of e-mails a day, and 98 percent of that is spam,” he said. As soon as the IT department sees the attachments coming through, it quarantines them.

“We intercepted the reply addresses to redirect the info back to UT,” said Godfrey Ovwigho, vice president for information technology.

The exact number of people who were affected is unclear, but no personal information of students or faculty was exposed, thanks to the redirecting of the reply addresses, Ovwigho said.

Hogle said it’s unclear how the phishers were able to obtain UT e-mail addresses.

“That’s something we’re trying to ascertain,” he said. “They harvest them in all different ways, and how they got our addresses, I’m sure we don’t know for certain.”

“Every once in a while, the ‘black hat’ community finds a way to morph,” Hogle said. “Spam filters then build new technologies to combat them.”

Despite the new techniques developed by the “black hat” community, UT’s filter updates itself daily and catches 98 to 99 percent of spam attempts, Ovwigho said.

In a mass e-mail to the UT community, Tom Phillips, e-mail systems architect, announced that the IT department has chosen a new product from IronPort Systems to replace UT’s existing anti-spam solution, SonicWall. After testing the new product for several weeks, IT determined IronPort was effective in combating over “99 percent of 6 million spam e-mail messages per day,” Phillips said in the e-mail announcement, adding that IT is working hard to have the new solution online by Sept. 23.

These phishing attempts are widespread, Hogle said.

“Every institution deals with this,” Ovwigho said. “It’s not exclusively UT that deals with this. It’s common all over the place.”
October 1, 2008

The “foreign dignitary” contacting students via e-mail may not be who
he says he is, and probably does not have millions of dollars to give
away. College campuses are prime targets for phishing e-mails, and Illinois State is no exception.

Carla Birckelbaw, director of Computer Infrastructure and Support Services at Illinois State, explained that many e-mails have been sent out claiming to be from “The ILSTU Team.” These phishing e-mails ask students to supply their password in order to verify their account, letting the hackers into the system.

“These attacks are directed at all of ISU, anyone who has an e-mail account,” Birckelbaw explained. “They are smart and they demonstrate that they know about the university in order to gain your trust.”

Though she did not know the exact number of students who had succumbed to these predatory schemes, Birckelbaw said that enough students were sending in their passwords to warrant a serious response from the school.

“Preventing problems like this is a major focus of [Computer Infrastructure and Support Services] security and education,” Birckelbaw said. “In fact, this is our number one priority right now.”

Erin Shaw, a sophomore graphic design major, is glad that there are people on campus working to prevent “phishers” from being successful.

“I hate getting those e-mails because they look so convincing,” Shaw said. “I’m glad that someone at ISU cares to inform us about them so that we don’t give in.”

Birckelbaw offered a few suggestions to help students recognize when they are being scammed.

“On the iCampus home page is an alert feed that will notify students of any recent suspicious activity. The latest information is posted immediately,” Birckelbaw said. “If you do receive something suspicious, report it immediately. We can detect these scammers.”

“Never give out your password, not even to large corporations like eBay or PayPal,” Birckelbaw said. “They will never ask for your password. Legitimate companies cannot ask for that information.”
October 1, 2008

Hyderabad, Sept. 25: Outsourcing is not just confined to software or
hardware companies, but is also a business model that exists within
the hacking groups with specialised language for communicating.
“Phishers and spammers have supply chain management too, and there are clear prices for the services that each one provides, and all these are beginning to show up now,” Mr Shantanu Ghosh, Vice-President, India Product Operations, Symantec Software India Pvt Ltd, said while speaking on the Emerging Security Threats and Trends here.
According to him, malicious activity nowadays is targeted at end-users
and not at computers like in the past.
“This has made the underground economy more mature, with flexible models, and a new person entering this area no longer needs to be a specialist but can buy the services,” he added.
Rapid adaptation
A study done by the company also found that these ‘criminals’ adapt rapidly to the security business, since it operates much like an online business.
For example, the cost of bank account information ranges anywhere between $10 and $1,000, and such information accounts for 22 per cent of the underground economy.
Next comes credit card information, which costs $0.40 – $20 and accounts for 13 per cent. Other information sold includes e-mail addresses, e-mail passwords, eBay accounts and so on.
He also said that malicious activity today has become primarily Web-based and that there has been a significant increase in site-specific vulnerabilities.
New threats
The company’s research also pointed out that in the second half of 2007, 499,811 new malicious code threats were found.
“This is a 136 per cent increase over the previous period, when 212,101 threats were detected, and a 571 per cent increase over the second half of 2006,” he said.
Mr Ghosh said that around 80 per cent of mail in circulation is spam, as against 8 per cent in 2001, and that spammers are using innovative ways to beat blocking technologies.
“Phishing and other end user exploits are on the rise. India
currently has around five to six million users who access social
networks and this is the other major area that is vulnerable,”
he said.
He added that India features amongst the top 10 countries from where
spam originates and earlier this year, Indian banks faced a six-fold
increase in phishing attacks.
October 1, 2008

While phishing is a problem, giving one company the power to block any site that it wishes at the browser level never seemed like a good idea.
Actually, giving a single company this kind of authority is usually
not a bad idea. Spamhaus and email, for example.
The issue is about trust. Even with this goof-up, I trust google (although their response to this could change that). Hell, I trust MS here too, to a limited extent.
Yeah. While I reflexively rankle at the idea of blocking a whole
swathe of domains like that, it’s unfortunately clear that services
like dyndns and mine.nu are going to be overrun with phishers and
scammers because they’re just as convenient to them as they are to
non-malicious Internet users.
We need to educate users to check the URL before entering anything.
Any time you rely on a technological solution to a social problem you
end up with woes.
It’s just not going to happen. We like to think that “everyone” is
capable of understanding what is going on when they browse the web,
but that’s wishful thinking.
It will be a LONG time until you can ever hope that the general public
is as smart as the malicious few out there. Until then technology
solutions will continue to be needed, desired and our best bet in
combating this. Hell, they always will.
I don’t know anything about the FWT site; it may be fine. However, do
remember that just because a site is trustworthy over time doesn’t
mean it is trustworthy today, on this visit. I just had that driven
home for me the other day. In my off time, I am a youth soccer coach.
The website for our league has been fine for several years. Last week
I visited it and got the malware warning from FireFox. I checked with
the webmaster and sure enough, they had gotten hit with a SQL
injection attack and had indeed gotten malware of some sort hosted on
the site. So, FWT may be a false positive – but it is at least possible
that they also got successfully attacked. We really don’t have a good
system to evaluate trust on the fly due to the dynamic nature of
internet content. A page that was fine 20 minutes ago may attack you
now.
Granted, I can see there are opportunities for abuse here, but if the
owners of dynamic dns domains don’t properly police their “customers”
and spammers and/or other malicious websites start using it, then
Google has every right to blacklist the entire domain. Of course, it’s
arguable exactly how much can be done to prevent it, but if you’re
really concerned about not getting your site blocked, go ahead and
blow the $7 a year on your own domain, or use a smaller ddns service
that can actually pay attention to the nature of the hosts it’s
serving.
As far as having any one third party responsible for maintaining a
blacklist, exactly how else do you intend to do it? You can always
create your own blacklist, but that would first require you to “enjoy”
the sites you would prefer get blocked automatically. You’ll just have
to trust someone to make that reasonable decision for you. Sure, there
will be some mistakes, but that’s the price you pay for protection.
Granted, I can see there are opportunities for abuse here, but if the
owners of dynamic dns domains don’t properly police their “customers”
and spammers and/or other malicious websites start using it, then
Google has every right to blacklist the entire domain.
Countries have been banned from sites, email, IRC channels and so on with this argument. Just so you know, some ISPs have de facto monopolies in their countries, and everyone there gets the same domain. Any idiot that says ‘let’s ban *.il or *.es, because I got 10 spam messages from there’ should be fired on the spot. In fact, if he works at google, whoever hired him should be fired, too.
I don’t get why you are getting annoyed that I (and probably many
others) do things like this?
In my mind giving this power to Google is the most objectionable thing
related to the company. I know somebody who has had his legitimate
business ruined because Google mistakenly added his site to this list.
Why? Because it was hosted on the same physical server as a truly
objectionable web site.
People need to stop childishly sneering at Windows users and take
their focus away from Microsoft. The terrible Goliath is clearly
Google now. Even when it’s not being evil it causes trouble just by
being *clumsy*.
The terrible Goliath is clearly Google now. Even when it’s not being
evil it causes trouble just by being *clumsy*.
No, Google doesn’t filter by IP address. But because the site was
hosted on the same server as a bad site it added a URL block for the
innocent too. Do you see?
Secondly, the issue isn’t about me using Firefox/Google. It’s about
customers who did and were told that the site they had browsed to was
malicious. The business lost a valuable customer this way and folded.
No, Google doesn’t filter by IP address. But because the site was
hosted on the same server as a bad site it added a URL block for the
innocent too. Do you see?
Doesn’t sound like a very professional business if it was using the
same domain that the bad site was on. Considering one can get a.com
for 6USD a year, there really is no excuse.
It’s about customers who did and were told that the site they had
browsed to was malicious. The business lost a valuable customer this
way and folded.
This company obviously wasn’t doing very well to begin with, or doing things properly either – this is not surprising.
You are not going to convince me that they couldn’t have done anything to change the outcome, even when they became aware of the situation.
What I do find interesting is the fact you claim Google did this, when the anti-phishing filter in the most popular browser, IE, is run by Microsoft. The most popular search engine is Yahoo! – which does not use any phishing data from Google.
I would assume the original AC is lying, because Google’s practices on filtering bad sites were disclosed long ago on [stopbadware.org].
This is the first time we’ve heard about Google (or any others) making
a bad block. As long as Google fixes this expeditiously, I’d say that
it’s an acceptable margin of error and the amount of phishing sites
blocked is by far worth it. Now, if wikileaks suddenly gets blocked
for ‘phishing’, something is definitely awry.
Any maintained blacklist of any reasonable size is going to end up
with false positives. It’s one of those things you just have to
accept. People notice and report it, the entry gets removed, and we
move on.
Putting anti-phishing filters into browsers just shifts the
responsibility of good security practices from the user to some
blacklisting company. What incentive is there to be wary about
suspicious sites if you can count on the almighty Google to hold your
hand while you browse the Web? This makes about as much sense as
someone installing parental controls in their machine and declaring
that their Internet connection is now “kid-friendly.”
I’ve never had these filters turned on, and I’ve never exposed my
financial data to others by accident. Usually this has something to do
with me hovering the mouse over links and checking the URL in the
status bar.
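That habit of checking a link's true hostname before trusting it can be sketched in code. A minimal Python illustration; the expected domain and sample URLs are made-up examples, not taken from any real incident:

```python
from urllib.parse import urlparse

def hostname_matches(url: str, expected_domain: str) -> bool:
    """Return True if the URL's hostname is the expected domain or a
    subdomain of it (e.g. 'signin.ebay.com' matches 'ebay.com')."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

# A plausible real URL vs. a classic lookalike, where the trusted name
# is only a prefix of an attacker-controlled hostname:
print(hostname_matches("https://signin.ebay.com/ws/eBayISAPI.dll", "ebay.com"))  # True
print(hostname_matches("http://ebay.com.example.net/signin", "ebay.com"))        # False
```

The second case is the trap the status-bar check is meant to catch: the hostname merely *starts with* the trusted name, so only a suffix comparison on the full hostname is reliable.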
If you’re serious about blocking phishing sites, you have to accept
some collateral damage. Blocking by URL stopped working last year;
most attacks have unique URLs now. Many have unique subdomains. So you
have to block at the second-level domain level to be effective.
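Blocking at the second-level domain amounts to reducing every hostname to its registrable parent before consulting the blacklist, so freshly generated subdomains still match. A minimal Python sketch; the naive two-label split and the blacklist contents are assumptions for illustration (a production version would consult the Public Suffix List, since a two-label split mishandles suffixes like co.uk):

```python
def second_level_domain(hostname: str) -> str:
    """Naively reduce a hostname to its last two labels.
    Caveat: this mishandles multi-label public suffixes such as
    'co.uk'; real blocklists use the Public Suffix List instead."""
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])

blacklist = {"mine.nu"}  # hypothetical blocked dynamic-DNS domain

def is_blocked(hostname: str) -> bool:
    return second_level_domain(hostname) in blacklist

# Unique, throwaway subdomains all collapse to the same blocked parent:
print(is_blocked("phisher123.mine.nu"))  # True
print(is_blocked("example.com"))         # False
```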
We publish a list of exploited domains; here is an example link: [ebay.com]. Click on that URL. It says “ebay.com”, right?
It looks like eBay, right? It’s not.
On the other hand, “tinyurl.com”, which used to be popular with
phishers, has been able to get off the blacklist by cracking down on
misuse of their service. It’s possible to do redirection competently.
When we started our list last year, it had about 175 exploited
domains. After some serious nagging and an article in The Register,
we’re down to 46. And only 11 have been on the list for more than
three months; the others come and go as exploits are reported and
holes plugged. So this is a problem that can be solved.
I’m glad to see Google taking a hard line on this. It’s necessary that
sites that do redirection feel the pain when they accept redirects to
hostile sites. Google can apply much more pain than we can. Few sites
will want to be on Google’s blacklist for long.
This strikes me as the first time Firefox really pushed something out by default that shouldn’t be. Just for one
example, people who are on LTSP networks, say, 200 users, will ALL
download anti-phishing, anti-malware blacklists from Google, each in
their own home directory. There’s no way that I know of, anyway, to
share this data – SQLite seems to make it impossible. That’s the first
mistake in creating a compatible, light web browser.
The second mistake is enabling website blocking based on 3rd party
blacklists by default. This is basically Microsoft UI thinking – “You
*need* this because you don’t know any better.” Screw that. I mean,
make it a checkbox on setup – “Use Google-provided anti-malware
blacklists” Simple as that. I spent weeks trying to find out why,
after just a few Firefox instances were launched on an LTSP server,
none more would load – part of this was because every user logging in
was trying to download the anti-malware stuff from Google, saturating
the line, and preventing Firefox from loading for the first time.
I hope the Firefox devs will take all scenarios into account when
making changes. It seems lame that every user needs all of the stuff
in places.sqlite. And even if you argue with that, at the LEAST make
it cross-DB compatible, so you can put everyone’s in a nice big
central MySQL database.
The corollary of this is, of course, that you should still be wary of
single points of failure, even if you do not believe they will fail
you on purpose.
Shit happens. Yes, it sucks, but it happens. Now, should we try to
blow up the googleplex? No. Google are not blocking based on a secret
agenda here, and you can bypass it or turn off the feature. OK, it’d
be nice if you could choose who provides the service, but overall,
it’s not that big a deal.
Of the 4329 pages we tested on the site over the past 90 days, 0
page(s) resulted in malicious software being downloaded and installed
without user consent. The last time Google visited this site was on
09/21/2008, and suspicious content was never found on this site within
the past 90 days.
Malicious software includes 7523 scripting exploit(s), 2911 trojan(s).
Successful infection resulted in an average of 0 new processes on the
target machine.
Over the past 90 days, mine.nu/ appeared to function as an
intermediary for the infection of 183 site(s) including
culportal.info, mipt.ru, baikal-discovery.ru.
Yes, this site has hosted malicious software over the past 90 days. It
infected 932 domain(s), including bernard-becker.com, mipt.ru,
dhammasara.com.
In some cases, third parties can add malicious code to legitimate
sites, which would cause us to show the warning message.
* Return to the previous page. * If you are the owner of this web
site, you can request a review of your site using Google Webmaster
Tools. More information about the review process is available in
Google’s Webmaster Help Center.
Presumably if Google thinks some subdomains are malicious, they
actually know which ones are in fact malicious? Owing to the fact that
they found them in the first place? I’m wondering if the reason they
just blocked the entire domain was because some attackers are just
registering lots of subdomains as a fast-flux method.
Um, no. The list is supplied by Google. When Firefox blocks a site, press the ‘Why was this site blocked?’ button to see Google’s warning about it ([google.com] in this case).