Sunday 18 December 2011

Facebook likes spam trick explained


For the past few months, more and more Facebook users have been liking fan pages or articles on blogs and sites without being aware of it.
Some spammers hide the Like button by overriding the CSS of Facebook's Like box widget.
As a result, some users end up liking a fan page when they click on what looks like the play button of a video.
The best way to avoid being tricked is to always log out of Facebook when you're not using it.
 
Facebook should react by locking down the widget's CSS to prevent tricks like invisible Like buttons, buttons restyled to pose as something else (a fake anti-spam button, for example), and so on.
 
 
As you can see, it's pretty easy to modify the Facebook CSS for the Like button:
 
.fan_box a:hover{
  text-decoration: none;
}
/* Strip the widget's border and background so it blends into the page */
.fan_box .full_widget{
  height: 200px;
  border: 0 !important;
  background: none !important;
  position: relative;
}
.fan_box .connect_top{
  background: none !important;
  padding: 0 !important;
}
/* Hide the fan page's profile picture and name completely */
.fan_box .profileimage, .fan_box .name_block{
  display: none;
}
.fan_box .connect_action{
  padding: 0 !important;
}
.fan_box .connections{
  padding: 0 !important;
  border: 0 !important;
  font-family: Arial, Helvetica, sans-serif;
  font-size: 11px;
  font-weight: bold;
  color: #666;
}
span.total{
  color: #FF6600;
  font-weight: bold;
}
.fan_box .connections .connections_grid {
  padding-top: 10px !important;
}
.fan_box .connections_grid .grid_item{
  padding: 0 10px 10px 0 !important;
}
.fan_box .connections_grid .grid_item .name{
  font-family: "lucida grande",tahoma,verdana,arial,sans-serif;
  font-weight: normal;
  color: #666 !important;
  padding-top: 1px !important;
}
/* Pull the Like button out of the normal flow so it can be positioned
   anywhere on the page, e.g. on top of a fake video play button */
.fan_box .connect_widget{
  position: absolute;
  bottom: 0;
  right: 10px;
  margin: 0 !important;
}
.fan_box .connect_widget .connect_widget_interactive_area {
  margin: 0 !important;
}
.fan_box .connect_widget td.connect_widget_vertical_center {
  padding: 0 !important;
}

Get free .info domain


For the UK address you'll need, you can use an online UK business directory.
Follow these steps to get a .info domain for free:

   1. Go to http://www.freeproxylists.com/uk.html
   2. Grab a UK proxy.
   3. Go to http://www.freeola.com
   4. Register there.
   5. Visit http://www.freeola.com/giveaway
   6. Click on the button that says “Login and choose my free domain”.
   7. Choose your free .info domain and enter the information required. (Look up fakenamegenerator.com or search Google for a UK address to use.)
   8. Wait a few hours. (It took around 30 hours for me, but it varies.)
   9. You'll get a confirmation email in your mailbox. Enjoy.

That's it. Simple. Remember, the offer is only meant for UK residents, with one domain per household; that's why we use a UK proxy. If they find your registration suspicious, you won't get the domain, but I don't think they'll notice :) . And yes, you can change the name servers as well.

Sunday 11 December 2011

How to make robots.txt

Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called the Robots Exclusion Protocol.

It works like this: a robot wants to visit a web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds: 

User-agent: *
Disallow: / 
The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
There are two important considerations when using /robots.txt:
  • Robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email address harvesters used by spammers, will pay no attention.
  • The /robots.txt file is a publicly available file. Anyone can see which sections of your server you don't want robots to use.
So don't try to use /robots.txt to hide information. 
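
On the crawler side, a well-behaved robot performs this check before every fetch. As a rough illustration (a minimal sketch only, using Python's standard urllib.robotparser module; the crawler name "MyCrawler" is just a placeholder):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()  # download and parse the robots.txt file

# can_fetch(useragent, url) applies the site's rules for that user agent
print(rp.can_fetch("MyCrawler", "http://www.example.com/welcome.html"))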

How to create a /robots.txt file

Where to put it

The short answer: in the top-level directory of your web server.
The longer answer:
When a robot looks for the "/robots.txt" file for a URL, it strips the path component from the URL (everything from the first single slash) and puts "/robots.txt" in its place.
For example, for "http://www.example.com/shop/index.html", it will remove the "/shop/index.html", replace it with "/robots.txt", and end up with "http://www.example.com/robots.txt". 
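
If you want to see that transformation in code, here is a minimal sketch using Python's urllib.parse (the helper name robots_txt_url is made up for this example):

from urllib.parse import urlsplit, urlunsplit

def robots_txt_url(page_url):
    # Keep the scheme and host, drop the path, and put /robots.txt in its place
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_txt_url("http://www.example.com/shop/index.html"))
# prints: http://www.example.com/robots.txt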

So, as a web site owner you need to put it in the right place on your web server for that resulting URL to work. Usually that is the same place where you put your web site's main "index.html" welcome page. Where exactly that is, and how to put the file there, depends on your web server software. 

Remember to use all lower case for the filename: "robots.txt", not "Robots.TXT".

What to put in it

The "/robots.txt" file is a text file, with one or more records. Usually contains a single record looking like this:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~joe/
In this example, three directories are excluded.
Note that you need a separate "Disallow" line for every URL prefix you want to exclude -- you cannot say "Disallow: /cgi-bin/ /tmp/" on a single line. Also, you may not have blank lines in a record, as they are used to delimit multiple records.
Note also that globbing and regular expressions are not supported in either the User-agent or Disallow lines. The '*' in the User-agent field is a special value meaning "any robot". Specifically, you cannot have lines like "User-agent: *bot*", "Disallow: /tmp/*" or "Disallow: *.gif".
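In other words, a Disallow value is matched as a plain path prefix. As a rough sketch of that matching rule (the helper name is_allowed is made up for this example):

def is_allowed(path, disallow_prefixes):
    # A path is blocked if it starts with any disallowed prefix;
    # an empty Disallow value blocks nothing, and there are no wildcards.
    return not any(path.startswith(p) for p in disallow_prefixes if p)

rules = ["/cgi-bin/", "/tmp/", "/~joe/"]
print(is_allowed("/cgi-bin/search?q=x", rules))  # False: blocked
print(is_allowed("/index.html", rules))          # True: allowed
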
What you want to exclude depends on your server. Everything not explicitly disallowed is considered fair game to retrieve. Here follow some examples:
To exclude all robots from the entire server
User-agent: *
Disallow: / 
 
To allow all robots complete access
 
User-agent: *
Disallow:
(or just create an empty "/robots.txt" file, or don't use one at all)
To exclude all robots from part of the server
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
To exclude a single robot
User-agent: BadBot
Disallow: /
To allow a single robot
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
To exclude all files except one
This is currently a bit awkward, as the original standard has no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:
User-agent: *
Disallow: /~joe/stuff/
Alternatively you can explicitly disallow all disallowed pages:
User-agent: *
Disallow: /~joe/junk.html
Disallow: /~joe/foo.html
Disallow: /~joe/bar.html