Let’s Dig into some Core SEO Concepts

Now if we start off with a faulty foundation, it will be very easy to get discouraged from doing the right things, or from doing SEO at all, on our sites. That is why starting off right from the beginning is crucial to success. Try to think of your SEO efforts as a long-term business plan. You may not see results today, but if you keep doing the right things, the results will come.

Now, let’s dig into some core SEO concepts.

The first thing we must do is avoid techniques that can get us in trouble with the search engines. These techniques are often referred to as black hat tactics, and they can land you in serious trouble; in fact, they can get you downright delisted. I like to think of black hat techniques as “temporary success with long-term damage potential.”

The techniques that we are going to discuss are tried and proven to get you the best long-term results. Black hatters try to fool the search engines, and the search engines do not like looking stupid to end users/searchers. So, it is best to avoid shady techniques if we want to stay in the search engines’ good graces. After all, the search engines are offering a free service, and they owe us nothing in return. If they decide to remove our site from their listings, there is no 1-800 number we can call to quickly rectify the situation. Sure, you can ask Google to reconsider your site, but they are in no hurry to add it back into their index. They are plenty busy as it is, and chances are you don’t pay their bills, so you will be a bit down the priority list.

How the search engines work – Intro

Search engines add content to their index and search engine results pages with the help of a “spider.” A spider is a program also known as a bot or web crawler; because it crawls the web, it gets the name spider. These spiders help the search engines keep up with the ever-changing web. A spider’s job is to pull all of the pertinent data from a web page or website and then follow the links from that site to the next site, where it repeats the process. Now, that poor little guy must get tired. By going from site to site, all of the web pages on the internet will be found, at least hypothetically.

There are many different spiders that perform many types of functions online, but our focus is on the spiders that crawl web pages. As you would imagine, search engines use more than one spider to index the entire web, and they store this data in more than one data center or database. When a spider arrives at a web page, it records that page in a data center. Once the web page has been fetched, the text of the page is loaded into the search engine’s index (which is a large database of words).
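
To make the crawl-and-index cycle concrete, here is a minimal sketch of a crawler written in Python. It is an illustration of the idea, not how any real search engine works: the starting URL, the page limit, and the tiny in-memory “index” of words are all assumptions made for the example, and it only uses the standard library.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects the visible text and the outgoing links of one HTML page."""

    def __init__(self):
        super().__init__()
        self.words = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Remember every link so the spider can follow it to the next page.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        # Collect the page's text, which is what gets indexed.
        self.words.extend(data.split())


def crawl(start_url, max_pages=10):
    """Fetch pages one by one, 'index' their words, and follow their links."""
    index = {}              # word -> set of URLs containing it (a toy index)
    to_visit = [start_url]
    visited = set()

    while to_visit and len(visited) < max_pages:
        url = to_visit.pop(0)
        if url in visited:
            continue
        visited.add(url)

        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip pages that cannot be fetched

        parser = PageParser()
        parser.feed(html)

        # Load the page's words into the index, like a search engine's word database.
        for word in parser.words:
            index.setdefault(word.lower(), set()).add(url)

        # Follow the links from this page to the next pages.
        for link in parser.links:
            to_visit.append(urljoin(url, link))

    return index


if __name__ == "__main__":
    # Hypothetical starting point; a real crawler would also honor robots.txt
    # and run many spiders in parallel across many data centers.
    toy_index = crawl("https://example.com", max_pages=3)
    print(f"Indexed {len(toy_index)} distinct words.")
```

The real thing differs in scale rather than in kind: many spiders crawl in parallel, the queue of links to visit is enormous, and the index lives in distributed data centers instead of a Python dictionary, but the fetch, extract, index, follow-links loop is the same.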
