Note: the content below is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/936626/

How can I force a hard refresh (ctrl+F5)?

Tags: .net, asp.net-mvc, http, caching

Asked by Chris Conway

We are actively developing a website using .Net and MVC and our testers are having fits trying to get the latest stuff to test. Every time we modify the style sheet or external javascript files, testers need to do a hard refresh (ctrl+F5 in IE) in order to see the latest stuff.


Is it possible for me to force their browsers to get the latest version of these files instead of them relying on their cached versions? We're not doing any kind of special caching from IIS or anything.


Once this goes into production, it will be hard to tell clients that they need to hard refresh in order to see the latest changes.


Thanks!


Accepted answer by Serhat Ozgel

You need to modify the names of the external files you refer to. For e.g. add the build number at the end of each file, like style-1423.css and make the numbering a part of your build automation so that the files and the references are deployed with a unique name each time.

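As a rough sketch of this approach (the helper name and query parameter are my own invention, not from the answer), the build number can be read from the assembly version at startup, so references update automatically with each build and no manual renaming is needed:

```csharp
// Sketch only: appends the executing assembly's build number to a content URL
// (e.g. /Content/style.css?b=1423) so each deployed build busts the cache.
using System.Reflection;
using System.Web.Mvc;

public static class VersionedContentHelper
{
    // Read once per app domain; changes whenever a new build is deployed.
    private static readonly string BuildNumber =
        Assembly.GetExecutingAssembly().GetName().Version.Build.ToString();

    public static string Versioned(this UrlHelper url, string contentPath)
    {
        return url.Content(contentPath) + "?b=" + BuildNumber;
    }
}
```

In a view: `<link href="@Url.Versioned("~/Content/style.css")" rel="stylesheet" type="text/css" />`. This variant uses a query string rather than renaming the file itself; renaming (as the answer suggests) avoids the proxy-caching caveat with query strings.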

Answer by Drew Noakes

I came up against this too and found what I consider to be a very satisfying solution.


Note that using query parameters (.../foo.js?v=1) reportedly means the file will not be cached by some proxy servers. It's better to modify the path directly.


We need the browser to force a reload when the content changes. So, in the code I wrote, the path includes an MD5 hash of the file being referenced. If the file is republished to the web server but has the same content, then its URL is identical. What's more, it's safe to use an infinite expiry for caching too, as the content of that URL will never change.


This hash is calculated at runtime (and cached in memory for performance), so there's no need to modify your build process. In fact, since adding this code to my site, I haven't had to give it much thought.


You can see it in action at this site: Dive Seven - Online Dive Logging for Scuba Divers


In CSHTML/ASPX files


<head>
  @Html.CssImportContent("~/Content/Styles/site.css")
  @Html.ScriptImportContent("~/Content/Scripts/site.js")
</head>
<img src="@Url.ImageContent("~/Content/Images/site.png")" />

This generates markup resembling:


<head>
  <link rel="stylesheet" type="text/css"
        href="/c/e2b2c827e84b676fa90a8ae88702aa5c" />
  <script src="/c/240858026520292265e0834e5484b703"></script>
</head>
<img src="/c/4342b8790623f4bfeece676b8fe867a9" />

In Global.asax.cs


We need to create a route to serve the content at this path:


routes.MapRoute(
    "ContentHash",
    "c/{hash}",
    new { controller = "Content", action = "Get" },
    new { hash = @"^[0-9a-zA-Z]+$" } // constraint
    );

ContentController


This class is quite long. The crux of it is simple, but it turns out that you need to watch for changes to the file system in order to force recalculation of cached file hashes. I publish my site via FTP and, for example, the bin folder is replaced before the Content folder. Anyone (human or spider) who requests the site during that window will cause a hash of the old content to be cached.


The read/write locking makes the code look more complex than it really is.


public sealed class ContentController : Controller
{
    #region Hash calculation, caching and invalidation on file change

    private static readonly Dictionary<string, string> _hashByContentUrl = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    private static readonly Dictionary<string, ContentData> _dataByHash = new Dictionary<string, ContentData>(StringComparer.Ordinal);
    private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
    private static readonly object _watcherLock = new object();
    private static FileSystemWatcher _watcher;

    internal static string ContentHashUrl(string contentUrl, string contentType, HttpContextBase httpContext, UrlHelper urlHelper)
    {
        EnsureWatching(httpContext);

        _lock.EnterUpgradeableReadLock();
        try
        {
            string hash;
            if (!_hashByContentUrl.TryGetValue(contentUrl, out hash))
            {
                var contentPath = httpContext.Server.MapPath(contentUrl);

                // Calculate and combine the hash of both file content and path
                byte[] contentHash;
                byte[] urlHash;
                using (var hashAlgorithm = MD5.Create())
                {
                    using (var fileStream = System.IO.File.Open(contentPath, FileMode.Open, FileAccess.Read, FileShare.Read))
                        contentHash = hashAlgorithm.ComputeHash(fileStream);
                    urlHash = hashAlgorithm.ComputeHash(Encoding.ASCII.GetBytes(contentPath));
                }
                var sb = new StringBuilder(32);
                for (var i = 0; i < contentHash.Length; i++)
                    sb.Append((contentHash[i] ^ urlHash[i]).ToString("x2"));
                hash = sb.ToString();

                _lock.EnterWriteLock();
                try
                {
                    _hashByContentUrl[contentUrl] = hash;
                    _dataByHash[hash] = new ContentData { ContentUrl = contentUrl, ContentType = contentType };
                }
                finally
                {
                    _lock.ExitWriteLock();
                }
            }

            return urlHelper.Action("Get", "Content", new { hash });
        }
        finally
        {
            _lock.ExitUpgradeableReadLock();
        }
    }

    private static void EnsureWatching(HttpContextBase httpContext)
    {
        if (_watcher != null)
            return;

        lock (_watcherLock)
        {
            if (_watcher != null)
                return;

            var contentRoot = httpContext.Server.MapPath("/");
            _watcher = new FileSystemWatcher(contentRoot) { IncludeSubdirectories = true, EnableRaisingEvents = true };
            var handler = (FileSystemEventHandler)delegate(object sender, FileSystemEventArgs e)
            {
                // TODO would be nice to have an inverse function to MapPath.  does it exist?
                var changedContentUrl = "~" + e.FullPath.Substring(contentRoot.Length - 1).Replace("\\", "/");
                _lock.EnterWriteLock();
                try
                {
                    // if there is a stored hash for the file that changed, remove it
                    string oldHash;
                    if (_hashByContentUrl.TryGetValue(changedContentUrl, out oldHash))
                    {
                        _dataByHash.Remove(oldHash);
                        _hashByContentUrl.Remove(changedContentUrl);
                    }
                }
                finally
                {
                    _lock.ExitWriteLock();
                }
            };
            _watcher.Changed += handler;
            _watcher.Deleted += handler;
        }
    }

    private sealed class ContentData
    {
        public string ContentUrl { get; set; }
        public string ContentType { get; set; }
    }

    #endregion

    public ActionResult Get(string hash)
    {
        _lock.EnterReadLock();
        try
        {
            // set a very long expiry time
            Response.Cache.SetExpires(DateTime.Now.AddYears(1));
            Response.Cache.SetCacheability(HttpCacheability.Public);

            // look up the resource that this hash applies to and serve it
            ContentData data;
            if (_dataByHash.TryGetValue(hash, out data))
                return new FilePathResult(data.ContentUrl, data.ContentType);

            // TODO replace this with however you handle 404 errors on your site
            throw new Exception("Resource not found.");
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }
}

Helper Methods


You can remove the attributes if you don't use ReSharper.


public static class ContentHelpers
{
    [Pure]
    public static MvcHtmlString ScriptImportContent(this HtmlHelper htmlHelper, [NotNull, PathReference] string contentPath, [CanBeNull, PathReference] string minimisedContentPath = null)
    {
        if (contentPath == null)
            throw new ArgumentNullException("contentPath");
#if DEBUG
        var path = contentPath;
#else
        var path = minimisedContentPath ?? contentPath;
#endif

        var url = ContentController.ContentHashUrl(path, "text/javascript", htmlHelper.ViewContext.HttpContext, new UrlHelper(htmlHelper.ViewContext.RequestContext));
        return new MvcHtmlString(string.Format(@"<script src=""{0}""></script>", url));
    }

    [Pure]
    public static MvcHtmlString CssImportContent(this HtmlHelper htmlHelper, [NotNull, PathReference] string contentPath)
    {
        // TODO optional 'media' param? as enum?
        if (contentPath == null)
            throw new ArgumentNullException("contentPath");

        var url = ContentController.ContentHashUrl(contentPath, "text/css", htmlHelper.ViewContext.HttpContext, new UrlHelper(htmlHelper.ViewContext.RequestContext));
        return new MvcHtmlString(String.Format(@"<link rel=""stylesheet"" type=""text/css"" href=""{0}"" />", url));
    }

    [Pure]
    public static string ImageContent(this UrlHelper urlHelper, [NotNull, PathReference] string contentPath)
    {
        if (contentPath == null)
            throw new ArgumentNullException("contentPath");
        string mime;
        if (contentPath.EndsWith(".png", StringComparison.OrdinalIgnoreCase))
            mime = "image/png";
        else if (contentPath.EndsWith(".jpg", StringComparison.OrdinalIgnoreCase) || contentPath.EndsWith(".jpeg", StringComparison.OrdinalIgnoreCase))
            mime = "image/jpeg";
        else if (contentPath.EndsWith(".gif", StringComparison.OrdinalIgnoreCase))
            mime = "image/gif";
        else
            throw new NotSupportedException("Unexpected image extension.  Please add code to support it: " + contentPath);
        return ContentController.ContentHashUrl(contentPath, mime, urlHelper.RequestContext.HttpContext, urlHelper);
    }
}

Feedback appreciated!


Answer by RedFilter

Rather than a build number or random number, programmatically append the file's last-modified date to the URL as a query string. This prevents accidents where you forget to update the query string manually, and still allows the browser to cache the file when it has not changed.


Example output could look like this:


<script src="../../Scripts/site.js?v=20090503114351" type="text/javascript"></script>
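A minimal helper along these lines might look like the following (the class and method names are hypothetical; the idea is simply File.GetLastWriteTimeUtc formatted as a timestamp):

```csharp
using System.IO;
using System.Web;
using System.Web.Mvc;

public static class TimestampedContentHelper
{
    // Sketch: appends the file's last-modified time as ?v=yyyyMMddHHmmss,
    // so the URL (and thus the cached copy) changes only when the file does.
    public static string ContentWithTimestamp(this UrlHelper url, string contentPath)
    {
        var physicalPath = HttpContext.Current.Server.MapPath(contentPath);
        var stamp = File.GetLastWriteTimeUtc(physicalPath).ToString("yyyyMMddHHmmss");
        return url.Content(contentPath) + "?v=" + stamp;
    }
}
```

For production use you would want to cache the timestamp rather than hit the file system on every render.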

Answer by Chad Ruppert

Since you mention that only your testers are complaining, have you considered having them turn off their local browser cache, so that it checks for new content on every request? It will slow their browsers a touch, but unless you are doing usability testing every time, this is probably a lot easier than suffixing the filename, adding a query-string parameter, or modifying the headers.


This works in 90% of the cases in our test environments.


Answer by Erick

What you might do is call your JS file with a random string appended each time the page is refreshed. That way you can be sure it is always fresh.


You just need to call it this way: "/path/to/your/file.js?<random-number>"


Example: jquery-min-1.2.6.js?234266

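Sketched as a view helper (the names are mine, not Erick's), though note this defeats caching entirely, since every page load produces a different URL:

```csharp
using System;
using System.Web.Mvc;

public static class RandomContentHelper
{
    private static readonly Random _random = new Random();

    // Sketch: a fresh random number per render means the browser
    // re-downloads the file on every single page view.
    public static string RandomizedContent(this UrlHelper url, string contentPath)
    {
        return url.Content(contentPath) + "?" + _random.Next(100000, 1000000);
    }
}
```

This is handy in test environments, but the last-modified or content-hash approaches above are kinder to production bandwidth.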

Answer by DSO

In your references to CSS and JavaScript files, append a version query string and bump it every time you update the file. The server ignores the query string, but browsers treat the changed URL as a new resource and re-download it.


For example:


<link href="../../Themes/Plain/style.css?v=1" rel="stylesheet" type="text/css" />
<script src="../../Scripts/site.js?v=1" type="text/javascript"></script>

Answer by Ozzy

You could edit the HTTP headers of the files to force browsers to revalidate on each request.

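For example (a sketch of one way to do it, applied globally from Global.asax.cs; in practice you would likely restrict it to CSS/JS paths), a no-cache policy makes browsers revalidate on every request:

```csharp
using System;
using System.Web;

public class MvcApplication : HttpApplication
{
    // Sketch only: HttpCacheability.NoCache emits "Cache-Control: no-cache",
    // forcing browsers to revalidate their cached copy on every request.
    // Applying it to every response is heavy-handed; checking Request.Path
    // for .css/.js extensions first would be more selective.
    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        Context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
    }
}
```

The trade-off is an extra round trip (a conditional request) for every asset on every page view, which the fingerprinted-URL answers above avoid.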