Monday, February 14, 2011

My weirdest referring keywords


Yeahhhh I have no idea what to blog tonight. But I happened to take a peek at my referring keywords report in Google Analytics. Here are the weirdest of the bunch:

- Monster trucks. I know the post that draws the traffic, but how the hell do I still rank for this phrase?
- Snidely Whiplash. Again, I know the post. It's just weird, is all.
- Hobbits. I'm too tall for this.
- Lemming. 47 clicks in 30 days.
- Dr evil mini me. 31 clicks.
- Sarah Palin hot. God save me.
- Ass 18. God save all of us.
- Porn keywords. Actually, just put us all out of our misery.
- Nude scenes on the net. ...
- Ian carrot. Make it stop oh please what are these people thinking?
- Lederhosen wedgie. At this point, I stopped, afraid of what else I'd find.

Update: OK, I lied. The home run of weird was someone finding my site with this phrase: "i nominate tunafish. if that's a word then i'd like to order a chickenbird sandwich or a steakcow with a baked potato."


Seriously?


Another update: Now I can't stop. I'm finding stuff like "beware networt", "can a gun make your head explode", "how to tie an inchworm" and "donkeyf ---k". Bwah? There are more, but I can't type them without getting embarrassed.


Another another update (I'm going to be up all night I can tell): "marketing finger polish", "spastic squirrel"


My advice: Don't review your referring keyword list below #100 or so. You'll be up all freaking night giggling insanely.


I promise, tomorrow I'll have something more, er, useful.




Duplicate content sin #2: Default page linking


Last week I wrote about duplicate content sin #1 - screwy pagination. Today I'm going to explain a much simpler, but bigger problem: The inconsistent default page link.


When I say 'default page', I mean whatever page you'd first see if you navigated to a folder on a web site.


So the default page for Conversation Marketing (the whole site) can be found at www.conversationmarketing.com/. That's the root folder - the main folder housing my whole site.


The default page for all of this month's posts can be found at http://www.conversationmarketing.com/2010/10/. That's the sub-sub-folder /10/, inside the sub-folder /2010/, inside the root folder for www.conversationmarketing.com:


[Image: Conversation Marketing folder structure]


You can also find the default page for Conversation Marketing at http://www.conversationmarketing.com/index.htm. And you can find the default page for this month's posts at http://www.conversationmarketing.com/2010/10/index.htm.


Web servers automatically deliver these default pages when a visitor requests the folder - that's why you don't have to add 'index.htm' to these addresses.


The problems arise when a developer or designer links to default pages using different link styles at different times. For example, if your site has a 'home' link that points at '/index.htm' or 'default.aspx' or whatever your default page is, you've created duplication:

- Search engines and most people see your home page as www.yoursite.com. Most other sites link to you there, too.
- But search engines crawling your site also see the link to www.yoursite.com/index.htm, and follow that link.
- To a search engine, the '/index.htm' page and the www.yoursite.com page are two unique pages with the exact same content.

Voila. Duplication.


The same thing happens if you inconsistently link to subfolders in your site.


I won't even waste time explaining what this does to your link profile. It's bad.


The problem here is duplication. And, as we know, duplicate content sucks.


If you want to avoid this kind of problem, apply Ian's Rule of Simplicity: Always use the shortest version of any default page's address. That version should typically be:


www.yourdomain.com + folders


No filenames.


Do that, and you'll eliminate one huge duplication problem. Best part is, most of your default page links will be in your navigation. If your site was built by a relatively sane person, you can make one change to your site template and fix a site-wide duplication issue. Woo hoo!
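
If you want to check a page for this by script, here's a rough sketch of the idea - a hypothetical helper, assuming Python 3 with the requests and beautifulsoup4 packages, not something out of my actual toolbox:

# Sketch: flag links on a page that point at a default filename instead of the folder.
# The function name and filename list are assumptions, not from this post.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

DEFAULT_FILENAMES = ("index.htm", "index.html", "index.php", "default.aspx")

def find_default_page_links(page_url):
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    offenders = []
    for a in soup.find_all("a", href=True):
        href = urljoin(page_url, a["href"])
        if urlparse(href).path.lower().endswith(DEFAULT_FILENAMES):
            offenders.append(href)
    return offenders

for link in find_default_page_links("http://www.yoursite.com/"):
    print(link)

Anything it prints is a link that should drop its filename.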


By the way, this is also considered a canonicalization problem. I'll never stop ranting about canonicalization - you know that, right?

I've been writing up a storm this week, so no fancy conclusions or funny animal pictures. Bye.




Duplicate content sin #1: Pagination


Earlier this week I wrote about why duplicate content sucks in SEO. I'm going to start mixing in tutorials/explanations of common ways folks end up duplicating content on their sites, too.


Today's topic: Pagination. It's oh-so-easy to generate duplicates with those little '1 2 3 4 >>' links at the bottom of the page.


Say you've got a site called www.blah.com. You've written an article that's 12 pages long, and added pagination at the bottom, like this:


[Image: typical pagination links]


It's purty, and it works. When Google or Bing land on the page www.blah.com/articleaboutx/, they see the pagination and the page URL, and they get it: this page is page 1 of your article.



Nice.


Now, Googlebot crawls to page 2 of the article. That page is located at www.blah.com/articleaboutx/p2. Also no problem.


But when it attempts to crawl the '1' link, it sees a new URL: www.blah.com/articleaboutx/p1


That page has the same content as the first article page we saw at www.blah.com/articleaboutx/, because it is the first article page. But it's got a different URL.


[Image: Google gets confused by two URLs for page 1]


Two URLs, same page? Uh-oh. That's a duplication problem of the canonicalization variety.


If you have a large publication with, oh, 2000 articles, and all of those articles are paginated the way I described above, you've created 2000 duplicate pages on your site. And they happen to be the first page of every article - the most important page you've got.


Bloggers will link to the '/' or the '/p1' version randomly, depending on which URL they're viewing when they cut and paste.


Your caching software will have to cache both URLs.


And search engine crawlers will waste their time crawling all of those duplicates.


Blech. Luckily, this is an easy one to avoid.


This one's magical... it's tricky... wait for it...


Link the '1' in your pagination to the original URL for the first page of your article.


So, if your article's first page was at www.blah.com/articleaboutx/, make the '1' link point there, too. Don't point it at /p1.


Wow.
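
In template terms, that just means special-casing page one when you build the pagination links. A minimal sketch - the function name and URL pattern are hypothetical, assuming Python, not code from any particular CMS:

# Sketch: build pagination links so page 1 points at the article's root URL,
# never at a duplicate /p1.
def pagination_links(base_url, total_pages, current_page):
    links = []
    for page in range(1, total_pages + 1):
        # Page 1 reuses the original article URL; every other page gets /pN.
        url = base_url if page == 1 else "%sp%d" % (base_url, page)
        if page == current_page:
            links.append(str(page))  # the current page isn't a link
        else:
            links.append('<a href="%s">%s</a>' % (url, page))
    return " ".join(links)

print(pagination_links("http://www.blah.com/articleaboutx/", 12, 2))

For a 12-page article, page 1 links back to the bare folder URL and pages 2 through 12 get /p2 through /p12.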


This sounds silly, I bet, but I have yet to see a publisher site, a designer blog, or any other site that paginates get it right the first time. If it's right, it's because a cranky SEO whined about it.


If you don't like the sound of me whining, go ahead and fix it now.


There you have it: One duplicate content problem fixed.




Noise: The state of internet marketing


How's internet marketing doing these days?


I give it a B-. An 80%. The other 20% got lost in the noise.


Internet marketing is growing up!


Marketers with multi-million dollar budgets no longer look at me like a cockroach when I walk into their boardrooms. My relatives no longer think I sell porn for a living. A lot of marketers actually think before they throw wads of money at the web.


And, we've become kind of - dare I say it? - respectable. There are some big, competent agencies out there. Blueglass is thumping around, snapping up talent, scaring the living crap out of boutiques like mine (and yes, that's a good thing). Big brands are carving out chunks of their marketing budgets for us. It's a far more mature market than it was ten years ago.


Alas, it's not all sunshine and rainbows. There are still ripoffs aplenty: The scammers still outnumber the real marketers. Agencies and 'professionals' make ridiculous claims and spread total misinformation.


Noise.


The worst part is, I think half of them don't know any better. The competence level in our industry is embarrassing.


By the way, I'm not suggesting that traditional marketing was much better. But internet marketing is younger. We can't afford a high yokel factor yet - it tarnishes the whole industry. And that hurts us.


Internet marketing - at least agency- and consultant-driven internet marketing - is under pressure from outside, too.


Many clients are trying to reduce costs by moving internet marketing in-house. They figure replacing a whole team with a single person has to pay off, cause it's, you know, cheaper.


At the same time, the affiliate game is getting harder. A single company - Google - controls more and more of the audience. And they've cracked down on the kind of arbitrage that makes affiliate marketing so attractive. I don't think that's permanent, and the good affiliates still do just fine. But the slowdown is driving a lot of lousy affiliates to sell their 'methods' to unsuspecting clients. That's another huge temptation for business owners looking for easy solutions. Why spend $10,000 on a consultant if you can buy Jimmy John Billy Bob's $200 Earn Money While You Sleep Plan?


Yep. More noise.


We can't keep growing and maturing an industry when the noise level drowns out the music. We have to squelch the noise. A few ideas:

- Stop debating the clueless. Arguing with them just makes them look smarter. Don't let someone draw you into an argument about some theoretical 'method' for top rankings.
- Beat up the bullies. On the other hand, when the scammers use seedy promises to rip off clients, call them on it. Don't 'talk it over'. Don't 'teach the controversy'. Sock them in the mouth.
- Debate the smart folks. Clarify each other's points, like Jill Whelan did with me last week. That stuff's great, and it's invaluable. It raises the quality of information, and it keeps us sharp for the bullies.
- Get holistic. If you're an SEO, learn some conversion rate optimization. That way, you can help a client out when the traffic goes up, but sales don't. If you're a developer, learn a little SEO. It won't kill you.
- Educate. Don't just do stuff. Explain to clients why you're doing it. Even better, coach a fellow marketer who's looking to learn. You are not training a competitor. You are training a colleague.
- Show 'em the results. Never send a report to a client without showing what's worked, in terms that matter to them. Don't talk rankings, impressions and visitors. Talk leads, pipeline and sales.

There are lots of other ways, I'll bet, to grow and improve internet marketing as an industry. What are your ideas?




Python code to grab KeywordDiscovery API data


If you use the KeywordDiscovery API, and Python, my pain is your gain. It took me a few hours to get this to work. You can grab it and go. Here's the function, written in my usual Python pidgin. I don't recommend using it without a passing knowledge of Python, but that's up to you:

def kwdiscovery(username, password, phraselist):
    base64string = base64.encodestring('%s:%s' % (username, password))[:-1]
    authheader = "Basic %s" % base64string
    apiurl = "http://api.keyworddiscovery.com/queries.php?queries="
    separator = "%0D%0A"
    counter = 1
    for phrase in phraselist:
        # make sure there's no funny characters
        try:
            phrase.decode('ascii')
        except UnicodeDecodeError:
            continue
        phrase = phrase.replace(" ", "+")
        phrase = phrase.replace("\n", "")
        if (counter > 1):
            apiurl = apiurl + separator + phrase
        else:
            apiurl = apiurl + phrase
        counter = counter + 1
    apiurl = apiurl + "&empty=1"
    req = urllib2.Request(apiurl)
    req.add_header("Authorization", authheader)
    blah = urllib2.urlopen(req)
    # because sometimes, things just go wrong
    try:
        result = ET.parse(blah)
        resultlist = []
        lst = result.findall("r")
        for item in lst:
            this = item.attrib["q"], item.attrib["m"]
            resultlist.append(this)
    except:
        this = "__one of the words in this request caused an error:", apiurl
        resultlist = [this]
    return resultlist

And here's how you'd use the function:

#!/usr/bin/python
import string
import sys
import httplib
import urllib2
from urllib2 import Request, urlopen, URLError
import xml.etree.ElementTree as ET
import base64

f = open('longw.txt', 'r')
g = open('words_countedlongtail.txt', 'w')
words = f.readlines()
username = "ENTER KEYWORDDISCOVERY USERNAME HERE"
password = "ENTER KEYWORDDISCOVERY PASSWORD HERE"
start = 0
count = len(words)
while (count > 0):
    count = count - 9
    end = start + 9
    a = words[start:end]
    print "sent ", a
    resultlist = kwdiscovery(username, password, a)
    for l in resultlist:
        q = str(l[0])
        m = str(l[1])
        line = q + "\t" + m + "\n"
        g.write(line)
        print "received ", line
    start = end
f.close()
g.close()

Who knows, I might even create a web interface one of these days. In my spare time.




Register for Social Media Woot! Palm Springs by 11/4, get a free Apple TV


I've spent 3 days now in a Vicodin-induced haze, trying to recover from a kidney stone. Yesterday at about noon, the Vicodin stopped working and I spent 6 hours curled up in a ball.


Now I know why: The folks at Wappow! stole my Vicodin and replaced it with sugar pills. It's the only possibility, since they're giving away an Apple TV to everyone who signs up for Social Media Woot! Palm Springs before Thursday.


I don't make any money if you sign up - I have no financial axe to grind here - but the Woot! event in Hawaii was fantastic. If you want to hang out with some of the smartest people in the industry in a think tank-style environment where you get to ask tons of questions, the Woot! events are perfect.




SEO Tools I use


Yes, some of the links in this article are affiliate links; I might be evil and biased and out to rip you off; the FTC requires me to say this, etc., blah blah.

I've been rejiggering my SEO toolbox lately. I used to focus on the 'cool' stuff - things I thought would impress clients, generate pretty reports, etc. I've switched to emphasizing big labor-savers instead - tools that are versatile, let me work with raw data, and the like.


Here's what I'm using these days:


SEOMOZ Pro


When SEOMOZ launched their latest toolset, I nearly flung myself into the Green River. See, I've been slowly chipping away at an advanced toolset that would:

- Crawl web sites automatically;
- Diagnose potential onsite SEO problems;
- Generate easy, readable reports and alerts based on the diagnosis.

Then SEOMOZ came out with their Pro toolset, which:

- Crawls web sites automatically;
- Diagnoses potential onsite SEO problems;
- Generates easy, readable reports and alerts based on the diagnosis.

Sigh.


But, their tool rocks. It's super easy to use. Even cooler, it lets you download a crawl diagnostics file that you can filter through in Excel, generating your own reports. I'll be doing a video tutorial on that pretty soon.
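
If Excel isn't your thing, the same kind of filtering is easy to script. A rough sketch, assuming Python 3 and a plain CSV export - the filename and the 'URL' / 'Title' column headers below are guesses, so check them against the real file:

# Sketch: pull pages with missing title tags out of a crawl diagnostics CSV.
# The filename and column names are assumptions, not the actual export format.
import csv

with open("crawl_diagnostics.csv", newline="") as f:
    for row in csv.DictReader(f):
        if not row.get("Title", "").strip():
            print(row.get("URL", ""))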


And of course, when you sign up, you also get access to Linkscape, all their nifty content, and their Q&A service.


You can get a tour of the toolset here.


SEOBook


Aaron's SEOBook is best known (I think) for training content and the forums.


He's also got a kick-ass toolset. Some of the tools are free, so you can give 'em a test run. There's SEO for Firefox, of course, and the SEO Toolbar. Plus the keyword suggestion tool.


The really cool stuff, though, is under the pay account. With membership you get access to a suite of great domain research tools, a competitive research tool and my favorite, the SEO Site Planner, which makes generating a keyword map a breeze.


Like SEOMOZ, the SEOBook subscription is pricey, but it only has to save me 4-5 hours a month to pay for itself, and it more than takes care of that.


You can check out SEOBook here.


I've already written about this lovely command-line tool here. You can read up. It's geekery, but it's the Swiss Army Knife of search tools.


Majestic SEO


Another great link research tool, Majestic has something SEOMOZ's Linkscape doesn't have (yet): It shows link growth over time.


That's pretty important when clients start asking 'what have you done for me lately?'


You can also compare backlink histories between sites, check for other sites on your server, and generate pretty reports.


Check out Majestic SEO here.


If you're serious about really getting into the weeds in SEO, you probably need to learn Python, or Ruby, or PERL (if you enjoy punishment).


All of these languages will let you create a crawler and include some nifty libraries for web crawling, parsing HTML pages and other geekery.


Can you be a good SEO without learning any of them? Absolutely. But if you want to really understand how a crawler 'thinks', nothing beats building one. I use Python scripts now to test sites, track link building campaigns and apply Latent Dirichlet Allocation (LDA).
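
If you want to see how little it takes to get started, here's a bare-bones crawler sketch - assuming Python 3 with the requests and beautifulsoup4 packages, and deliberately skipping robots.txt, politeness delays and real error handling, so treat it as a toy rather than anything from my production scripts:

# Bare-bones crawler sketch: fetch a page, collect same-site links, repeat.
# Purely illustrative - no robots.txt handling, no crawl delay, no retries.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def crawl(start_url, max_pages=50):
    domain = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == domain and link not in seen:
                queue.append(link)
    return seen

for page in sorted(crawl("http://www.conversationmarketing.com/")):
    print(page)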


I'm a nerd, so I enjoy this stuff. If you're not, ignore this one and move on.


