25 Oct
Why static pages are bad

Clean, hand-coded HTML pages with short URLs are a must for both people and search engines to find and index your web pages, right? Wrong! This is another myth propagated by the SEO community, and I can prove it.
Do you think that Amazon or eBay hand-codes every page on their websites? Of course not. And their URL strings are not pretty either. They usually contain variables that point to a record in their database, marked by a “?” and an “=” sign in the URL. Yet both sites are among the most highly indexed sites on the Internet. So why do SEO companies insist that short, pretty URLs are the only way to get a page indexed? (Hint: they want your money.)
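To make the “?” and “=” idea concrete, here is a minimal sketch of how a dynamic URL like product?id=42 maps to a database record. The page name, schema, and product data are all made up for illustration; this is not Amazon’s or eBay’s actual code.

```python
# Hypothetical sketch: how a "product?id=42"-style URL points at a
# database record. The table and data are invented for this example.
import sqlite3
from urllib.parse import urlparse, parse_qs

# Tiny in-memory catalog standing in for a real product database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products VALUES (42, 'Blue Widget')")

def lookup_product(url):
    """Pull the 'id' variable out of the query string and fetch that record."""
    query = parse_qs(urlparse(url).query)
    product_id = int(query["id"][0])
    row = conn.execute(
        "SELECT name FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    return row[0] if row else None

print(lookup_product("http://example.com/product?id=42"))  # Blue Widget
```

The “ugly” part of the URL is just a lookup key; the server builds the page from the record it finds.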
SEO “experts” call these cryptic or unfriendly URLs and insist that search engines have problems with them. This isn’t true. Humans may have problems with long URLs, but machines simply don’t care. Even Google has said publicly that its crawler can handle dynamic URLs.
The other thing that these same SEO “experts” will tell you is that clean HTML helps with search engine placement. In other words, having extra lines and proper indenting will help you. Really? This may help humans to read your code, but it does nothing for the search engines. Search engines employ software robots or “bots” to spider your web pages, not humans. And the software doesn’t care if your code looks pretty. In fact, you may want to use an HTML scrambler program to make your code less readable by humans, just to keep prying eyes off your work and protect your source code. Try this: go to Amazon.com and select a product, then right-click and do a “View Source” for that page. On the page I selected, the opening tag doesn’t appear until line 70. Then there are multiple blank lines throughout the code, which is common with dynamically generated pages. The closing tag doesn’t appear until line 3266. You see, the software that analyzes these pages just scans for content; it doesn’t grade the site based on whether you are following recommended coding practices.
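A quick way to see why indentation is irrelevant to a bot: extract the text from a pretty-printed page and from a minified version of the same page, and compare. This sketch uses Python’s standard-library HTML parser as a stand-in for a crawler’s content extractor; the sample pages are invented.

```python
# Sketch: a crawler that extracts text doesn't care about indentation or
# blank lines. Pretty and "scrambled" HTML yield the same words.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the visible words from a page, ignoring all markup layout."""
    def __init__(self):
        super().__init__()
        self.words = []

    def handle_data(self, data):
        # split() discards the whitespace that pretty-printing adds.
        self.words.extend(data.split())

def extract_words(html):
    parser = TextExtractor()
    parser.feed(html)
    return parser.words

pretty = """
<html>
  <body>
    <h1>Blue Widget</h1>

    <p>Our best seller.</p>
  </body>
</html>
"""
ugly = "<html><body><h1>Blue Widget</h1><p>Our best seller.</p></body></html>"

print(extract_words(pretty) == extract_words(ugly))  # True
```

Both versions produce the identical word list, which is all the indexer sees.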
Now software isn’t always as smart as humans are. I frequently run a number of diagnostic programs against various websites to ensure that they are performing optimally, and many of these programs do in fact have problems with dynamically generated pages. Fortunately, those programs are not the same ones that Google uses.
I have a much smaller website than Amazon, but my dynamically generated site still has 342 pages indexed with Google. This is accomplished using inbound links to the site, of which there are 2,583. That puts my website in the top 11% of all websites on the Internet. No, it isn’t Amazon, but that isn’t bad for a small company.
But here is the real downside to static pages: the menu must be included as part of the page. If I want to add a new topic to my menu, that new topic would not show up on any of my previously created pages unless I went back and rebuilt every one of those pages to include the new menu items. It is much more efficient to build the menu dynamically every time a page is retrieved than it is to keep it as part of some static source code, just because you think that helps with internal link building.
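The menu point above can be sketched in a few lines: keep the topics in one list and render the menu at request time, so adding a topic updates every page the next time it is served. The topic names and URLs here are made up for illustration.

```python
# Minimal sketch of building the site menu at request time instead of
# hard-coding it into every static page. Topics and URLs are invented.
MENU_TOPICS = [
    ("Home", "/"),
    ("Products", "/products"),
    ("Support", "/support"),
]

def render_menu(topics):
    """Generate the menu HTML fresh on every page load."""
    items = "".join(
        f'<li><a href="{url}">{name}</a></li>' for name, url in topics
    )
    return f"<ul>{items}</ul>"

# Adding a topic here changes every page on the next request;
# no static pages need to be rebuilt.
MENU_TOPICS.append(("Blog", "/blog"))
print(render_menu(MENU_TOPICS))
```

One edit to the list, and every dynamically generated page picks it up; with static pages, the same change means regenerating the whole site.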
Static web pages are bad. Search engines can in fact find pages that are built dynamically. Dynamic content that is created from records in a database, even with long, ugly URL strings, is much easier to maintain. If you need a short URL because of size limitations (such as for paid advertising campaigns that limit your URL to 32 characters), try using a service like TinyURL.com. Dynamic content that can be updated frequently wins every time. Just ask Amazon. Or eBay.
Now go out and build some inbound links to your site and stop trying to analyze your code; leave that to the robots.