SEO for a Backbone.js-heavy page
Disclaimer: this page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license, link to the original question, and attribute it to the original authors (not me) on Stack Overflow.
Original question: http://stackoverflow.com/questions/10803218/
SEO for a backbone js heavy page
Asked by Saurav Shah
We use Backbone heavily for rendering our pages. All the data is passed as JSON from the server, and the HTML is created on the client with Backbone and Mustache. This poses a big problem for SEO. One way I was planning to get around this was to detect whether the request is from a bot and, if so, use something like HtmlUnit to render the page on the server and spit it out. I'd love some alternative ideas, and I'd also like to know whether there's a flaw in this plan.
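The bot-detection idea in the question can be sketched as a small piece of server-side middleware. This is a minimal illustration, not a definitive implementation: the user-agent list is intentionally incomplete, and `prerenderForBots`/`renderWithHeadlessBrowser` are hypothetical names standing in for whatever headless renderer (HtmlUnit or similar) is actually used.

```javascript
// Hypothetical check for well-known crawler user agents.
// The list is illustrative only, not exhaustive.
function isSearchBot(userAgent) {
  return /googlebot|bingbot|yandex|baiduspider|slurp/i.test(userAgent || '');
}

// Sketch of Express-style middleware: bots get server-rendered HTML,
// everyone else gets the normal client-side Backbone app.
function prerenderForBots(renderWithHeadlessBrowser) {
  return function (req, res, next) {
    if (isSearchBot(req.headers['user-agent'])) {
      // Hand the URL to a headless browser and return the static HTML.
      renderWithHeadlessBrowser(req.url, function (html) {
        res.send(html);
      });
    } else {
      next(); // fall through to the JS-rendered application
    }
  };
}
```

One known flaw with this approach: serving different content to bots than to users can be treated as cloaking if the snapshot diverges from what users see.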
Accepted answer by SimplGy
I don't necessarily like that your only options for answers are to redo everything to meet a broad best practice. There's good reason to consider an unobtrusive JavaScript approach, but maybe there's a good reason you're building this as a JS-required site. Let's pretend there is.
If you're doing a Backbone.js application with dynamically filled-in client templates, the best way I can think of to do this is described in the link below. Basically, it amounts to telling a headless browser to run through a set of navigation commands, visiting all your user/product pages, and saving a static HTML file at every step for SEO reasons.
What's the least redundant way to make a site with JavaScript-generated HTML crawlable?
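The snapshot-crawl idea above can be sketched as a plan that maps each client-side route to the static file a headless browser run would save for it. The route list and file-naming scheme here are assumptions for illustration; a real setup would then have HtmlUnit, PhantomJS, or similar visit each route and write out the rendered DOM.

```javascript
// Map a client-side route to the snapshot file a headless-browser crawl
// would produce for it (naming scheme is an assumption).
function snapshotPath(route) {
  var slug = route.replace(/^\//, '').replace(/\//g, '_') || 'index';
  return 'snapshots/' + slug + '.html';
}

// Hypothetical route list; in practice this comes from your router or sitemap.
var routes = ['/', '/products/42', '/users/jane'];
var plan = routes.map(snapshotPath);
// A headless browser would then visit each route and save
// document.documentElement.outerHTML to the matching file.
```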
Answered by Quentin
Build your site using Progressive Enhancement and Unobtrusive JavaScript.
When you do significant Ajax work, use the History API.
Then you have real URLs for everything and Google won't be a problem.
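The History API approach can be sketched in a few lines. `navigate` and `render` are hypothetical names; in a browser the first argument would be `window.history`, and a stub is used in the usage note so the sketch stays self-contained.

```javascript
// Give each Ajax-loaded view a real, bookmarkable URL via the History API.
function navigate(history, render, path, state) {
  history.pushState(state, '', path); // update the address bar without a reload
  render(path, state);                // swap the page content client-side
}

// In a browser you would also handle back/forward:
//   window.onpopstate = function (e) { render(location.pathname, e.state); };
```

Because every state has a real URL, the server can also answer a direct request for that URL with rendered HTML, which is what makes this crawler-friendly.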
Answered by lecstor
In a project I'm working on at the moment, I'm attempting to cover all the bases: a Backbone-driven client, pushState URIs, bookmarkable pages, and an HTML fallback where possible. The approach I've taken is to use Mustache for the templates, break them up into nice little components for my Backbone views, and make them available in raw form to the client. When a page is requested, the templates can be processed on the server to produce a full page, and Backbone attaches to the elements it wants to control.
It's not a simple setup, but so far I haven't hit any roadblocks and I haven't duplicated any templates. I've had to create a page-wrapper template for each available URL, since Mustache doesn't do "wrappers", but I think I should be able to eliminate these with some extra coding on the server.
The plan is to have some components as pure JS where the interface requires it, and others rendered by the server and enhanced with JS where desired.
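The shared-template idea above can be sketched as follows. To keep the sketch self-contained, a tiny `{{name}}`-only substitution function stands in for a real Mustache engine; the template string and data are made-up examples. The point is that one raw template string is rendered on the server for the initial page (and for crawlers) and handed to the client for Backbone to re-render later.

```javascript
// Simplified stand-in for Mustache: interpolates {{key}} placeholders only.
// A real setup would use the same Mustache library on server and client.
function renderTemplate(template, view) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return view[key] != null ? String(view[key]) : '';
  });
}

// One template, shared by both sides (example markup, not from the project).
var productTpl = '<li class="product">{{title}} - {{price}}</li>';

// Server side: produce full HTML for the first page load / crawlers.
var serverHtml = renderTemplate(productTpl, { title: 'Widget', price: '$9' });

// Client side: a Backbone view would reuse productTpl for later re-renders,
// attaching to the server-rendered elements it wants to control.
```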
Answered by Ben
There are pros and cons to using the Google Ajax crawling scheme. I used it for a social networking site (http://beta.playup.com), with mixed results...
I wrote a gem to handle this transparently as Rack middleware for Ruby users (gem install google_ajax_crawler): https://github.com/benkitzelman/google-ajax-crawler
Read about it at http://thecodeabode.blogspot.com.au/2013/03/backbonejs-and-seo-google-ajax-crawling.html
The summary is that even though I successfully served rendered DOM snapshots to requesting search engines, and I could see in Webmaster Tools that Google was crawling something like 11,000 pages of the site, I found that Google was prone to classifying the app's various states (URLs) as versions of the same page rather than as separate index entries. Try searching for beta.playup.com: only one entry is listed, even though the rendered content changes radically between URLs...
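For context, the crawling scheme this answer refers to works by URL rewriting: the crawler requests `?_escaped_fragment_=/some/state` and the server must respond with a snapshot of what the browser would show at `#!/some/state`. A minimal sketch of that mapping, assuming hashbang-style URLs (the function name is made up for illustration):

```javascript
// Map a crawler request under the Google Ajax crawling scheme back to the
// hashbang app URL whose snapshot the server should return.
function fragmentToAppUrl(requestUrl) {
  var m = requestUrl.match(/[?&]_escaped_fragment_=([^&]*)/);
  if (!m) return null; // a normal (non-crawler) request
  return requestUrl.split('?')[0] + '#!' + decodeURIComponent(m[1]);
}
```

As the answer notes, even a correct implementation of this mapping does not guarantee that Google will index each state as a separate page.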

