By Rick Anderson, John Luo, and Steve Smith
View or download sample code (how to download)
Caching basics
Caching can significantly improve the performance and scalability of an app by reducing the work required to generate content. Caching works best with data that changes infrequently and is expensive to generate. Caching makes a copy of data that can be returned much faster than from the source. Apps should be written and tested to never depend on cached data.
ASP.NET Core supports several different caches. The simplest cache is based on IMemoryCache, which represents a cache stored in the memory of the web server. Apps running on a server farm (multiple servers) should ensure sessions are sticky when using the in-memory cache. Sticky sessions ensure that subsequent requests from a client all go to the same server. For example, Azure Web apps use Application Request Routing (ARR) to route all subsequent requests to the same server.

Non-sticky sessions in a web farm require a distributed cache to avoid cache consistency problems. For some apps, a distributed cache can support higher scale-out than an in-memory cache. Using a distributed cache offloads the cache memory to an external process.

The in-memory cache can store any object. The distributed cache interface is limited to byte[]. Both the in-memory and distributed caches store cache items as key-value pairs.

System.Runtime.Caching/MemoryCache
System.Runtime.Caching/MemoryCache (NuGet package) can be used with:
- .NET Standard 2.0 or later.
- Any .NET implementation that targets .NET Standard 2.0 or later. For example, ASP.NET Core 2.0 or later.
- .NET Framework 4.5 or later.
Microsoft.Extensions.Caching.Memory/IMemoryCache (described in this article) is recommended over System.Runtime.Caching/MemoryCache because it's better integrated into ASP.NET Core. For example, IMemoryCache works natively with ASP.NET Core dependency injection.

Use System.Runtime.Caching/MemoryCache as a compatibility bridge when porting code from ASP.NET 4.x to ASP.NET Core.

Cache guidelines
- Code should always have a fallback option to fetch data and not depend on a cached value being available.
- The cache uses a scarce resource, memory. Limit cache growth:
- Do not use external input as cache keys.
- Use expirations to limit cache growth.
- Use SetSize, Size, and SizeLimit to limit cache size. The ASP.NET Core runtime does not limit cache size based on memory pressure. It's up to the developer to limit cache size.
Use IMemoryCache
Warning
Using a shared memory cache from Dependency Injection and calling SetSize, Size, or SizeLimit to limit cache size can cause the app to fail. When a size limit is set on a cache, all entries must specify a size when they're added. This can lead to issues because developers may not have full control over what uses the shared cache. For example, Entity Framework Core uses the shared cache and does not specify a size. If an app sets a cache size limit and uses EF Core, the app throws an InvalidOperationException. When using SetSize, Size, or SizeLimit to limit the cache, create a cache singleton for caching. For more information and an example, see Use SetSize, Size, and SizeLimit to limit cache size.

A shared cache is one shared by other frameworks or libraries. For example, EF Core uses the shared cache and does not specify a size.

In-memory caching is a service that's referenced from an app using Dependency Injection. Call AddMemoryCache in ConfigureServices, then request the IMemoryCache instance in the constructor.

The following code uses TryGetValue to check if a time is in the cache. If a time isn't cached, a new entry is created and added to the cache with Set. The CacheKeys class is part of the download sample.

The current time and the cached time are displayed:
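The sample code itself isn't reproduced on this page. A minimal sketch of the injection-plus-TryGetValue pattern, assuming an MVC controller and the sample's CacheKeys.Entry key, might look like:

```csharp
using Microsoft.Extensions.Caching.Memory;

public class HomeController : Controller
{
    private readonly IMemoryCache _cache;

    // IMemoryCache is provided by dependency injection
    // after services.AddMemoryCache() is called in ConfigureServices.
    public HomeController(IMemoryCache cache)
    {
        _cache = cache;
    }

    public IActionResult CacheTryGetValueSet()
    {
        // Look for the key; on a miss, create the entry and add it with Set.
        if (!_cache.TryGetValue(CacheKeys.Entry, out DateTime cacheValue))
        {
            cacheValue = DateTime.Now;

            var cacheEntryOptions = new MemoryCacheEntryOptions()
                // Keep in cache for this time; reset the clock if accessed.
                .SetSlidingExpiration(TimeSpan.FromSeconds(3));

            _cache.Set(CacheKeys.Entry, cacheValue, cacheEntryOptions);
        }

        // The view renders both the current time and the cached time.
        return View("Cache", cacheValue);
    }
}
```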
The following code uses the Set extension method to cache data for a relative time without creating a MemoryCacheEntryOptions object. The cached DateTime value remains in the cache while there are requests within the timeout period.

The following code uses GetOrCreate and GetOrCreateAsync to cache data.
The following code calls Get to fetch the cached time:
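A one-method sketch of the Get call, under the same assumptions:

```csharp
public IActionResult CacheGet()
{
    // Get returns the cached value, or default (null for DateTime?) on a miss.
    var cachedValue = _cache.Get<DateTime?>(CacheKeys.Entry);

    return View("Cache", cachedValue);
}
```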
The following code gets or creates a cached item with absolute expiration:
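The sample isn't shown here; one way to write it (CacheKeys.AbsEntry is an illustrative key name, not from the source):

```csharp
public IActionResult CacheGetOrCreateAbsolute()
{
    DateTime cachedValue = _cache.GetOrCreate(CacheKeys.AbsEntry, entry =>
    {
        // Evict 10 seconds after creation, regardless of access.
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(10);
        return DateTime.Now;
    });

    return View("Cache", cachedValue);
}
```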
A cached item set with a sliding expiration only is at risk of becoming stale. If it's accessed more frequently than the sliding expiration interval, the item will never expire. Combine a sliding expiration with an absolute expiration to guarantee that the item expires once its absolute expiration time passes. The absolute expiration sets an upper bound to how long the item can be cached while still allowing the item to expire earlier if it isn't requested within the sliding expiration interval. When both absolute and sliding expiration are specified, the expirations are logically ORed. If either the sliding expiration interval or the absolute expiration time pass, the item is evicted from the cache.
The following code gets or creates a cached item with both sliding and absolute expiration:
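A sketch of combining both expirations (CacheKeys.SlidingAbsEntry is illustrative):

```csharp
public IActionResult CacheGetOrCreateSlidingAndAbsolute()
{
    DateTime cachedValue = _cache.GetOrCreate(CacheKeys.SlidingAbsEntry, entry =>
    {
        // Refreshed on each access within 5 seconds,
        // but never kept longer than 20 seconds in total.
        entry.SlidingExpiration = TimeSpan.FromSeconds(5);
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(20);
        return DateTime.Now;
    });

    return View("Cache", cachedValue);
}
```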
The preceding code guarantees the data will not be cached longer than the absolute time.
GetOrCreate, GetOrCreateAsync, and Get are extension methods in the CacheExtensions class. These methods extend the capability of IMemoryCache.
MemoryCacheEntryOptions
The following sample:
- Sets a sliding expiration time. Requests that access this cached item will reset the sliding expiration clock.
- Sets the cache priority to CacheItemPriority.NeverRemove.
- Sets a PostEvictionDelegate that will be called after the entry is evicted from the cache. The callback is run on a different thread from the code that removes the item from the cache.
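The sample itself isn't included on this page; a sketch matching those three bullets (CacheKeys.CallbackEntry is illustrative) could be:

```csharp
public IActionResult CreateCallbackEntry()
{
    var cacheEntryOptions = new MemoryCacheEntryOptions()
        // Keep in cache for this time; accessing the item resets the clock.
        .SetSlidingExpiration(TimeSpan.FromSeconds(3))
        // Pin to cache: never removed by memory pressure or Compact.
        .SetPriority(CacheItemPriority.NeverRemove)
        // Called after the entry is evicted, on a different thread
        // from the code that removed the item.
        .RegisterPostEvictionCallback(callback: EvictionCallback, state: this);

    _cache.Set(CacheKeys.CallbackEntry, DateTime.Now, cacheEntryOptions);
    return RedirectToAction(nameof(Index));
}

private static void EvictionCallback(object key, object value,
    EvictionReason reason, object state)
{
    var message = $"Entry {key} was evicted. Reason: {reason}.";
    // Log or surface the message as appropriate for the app.
}
```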
Use SetSize, Size, and SizeLimit to limit cache size
A MemoryCache instance may optionally specify and enforce a size limit. The cache size limit doesn't have a defined unit of measure because the cache has no mechanism to measure the size of entries. If the cache size limit is set, all entries must specify a size. The ASP.NET Core runtime doesn't limit cache size based on memory pressure; it's up to the developer to limit cache size. The size specified is in units the developer chooses.

For example:
- If the web app was primarily caching strings, each cache entry size could be the string length.
- The app could specify the size of all entries as 1, and the size limit is the count of entries.
If SizeLimit isn't set, the cache grows without bound. The ASP.NET Core runtime doesn't trim the cache when system memory is low. Apps must be architected to:
- Limit cache growth.
- Call Compact or Remove when available memory is limited:
The following code creates a unitless fixed size MemoryCache accessible by dependency injection:
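One way to write such a cache (the class name MyMemoryCache matches the surrounding text; SizeLimit = 1024 is an illustrative value):

```csharp
using Microsoft.Extensions.Caching.Memory;

// An independent, size-limited memory cache. Only components that know
// how to set entry sizes should use it.
public class MyMemoryCache
{
    public MemoryCache Cache { get; } = new MemoryCache(
        new MemoryCacheOptions
        {
            // Unitless: all users of this cache must agree on what a
            // "unit" of size means (bytes, characters, entry count, ...).
            SizeLimit = 1024
        });
}
```

It would then be registered as a singleton, for example with `services.AddSingleton<MyMemoryCache>();` in ConfigureServices.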
SizeLimit doesn't have units. Cached entries must specify size in whatever units they deem most appropriate if the cache size limit has been set. All users of a cache instance should use the same unit system. An entry won't be cached if the sum of the cached entry sizes exceeds the value specified by SizeLimit. If no cache size limit is set, the cache size set on the entry is ignored.

The following code registers MyMemoryCache with the dependency injection container. MyMemoryCache is created as an independent memory cache for components that are aware of this size-limited cache and know how to set cache entry size appropriately.

The following code uses MyMemoryCache:

The size of the cache entry can be set by Size or the SetSize extension method:
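A sketch of setting entry size against the size-limited cache (assuming an injected `_myMemoryCache` field of the MyMemoryCache type described above):

```csharp
// Each entry declares Size = 1, so SizeLimit effectively caps the
// number of entries rather than their byte size.
var cacheEntryOptions = new MemoryCacheEntryOptions()
    .SetSize(1)                                  // equivalent: Size = 1
    .SetSlidingExpiration(TimeSpan.FromSeconds(3));

_myMemoryCache.Cache.Set(CacheKeys.Entry, DateTime.Now, cacheEntryOptions);
```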
MemoryCache.Compact
MemoryCache.Compact attempts to remove the specified percentage of the cache in the following order:

- All expired items.
- Items by priority. Lowest priority items are removed first.
- Least recently used objects.
- Items with the earliest absolute expiration.
- Items with the earliest sliding expiration.

Pinned items with priority NeverRemove are never removed. The following code removes a cache item and calls Compact:
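That removal-plus-compaction step might be sketched as follows, assuming the injected IMemoryCache is backed by a MemoryCache instance (Compact is defined on the concrete class, not on the interface):

```csharp
_cache.Remove(CacheKeys.Entry);

// Compact is a MemoryCache method; guard the cast in case a different
// IMemoryCache implementation is registered.
if (_cache is MemoryCache memoryCache)
{
    // Attempt to remove roughly 25% of cached items.
    memoryCache.Compact(.25);
}
```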
See Compact source on GitHub for more information.
Cache dependencies
The following sample shows how to expire a cache entry if a dependent entry expires. A CancellationChangeToken is added to the cached item. When Cancel is called on the CancellationTokenSource, both cache entries are evicted.

Using a CancellationTokenSource allows multiple cache entries to be evicted as a group. With the using pattern in the code, cache entries created inside the using block inherit triggers and expiration settings.
Additional notes
- Expiration doesn't happen in the background. There is no timer that actively scans the cache for expired items. Any activity on the cache (Get, Set, Remove) can trigger a background scan for expired items. A timer on the CancellationTokenSource (CancelAfter) also removes the entry and triggers a scan for expired items. The following example uses CancellationTokenSource(TimeSpan) for the registered token. When this token fires, it removes the entry immediately and fires the eviction callbacks:
- When using a callback to repopulate a cache item:
- Multiple requests can find the cached key value empty because the callback hasn't completed.
- This can result in several threads repopulating the cached item.
- When one cache entry is used to create another, the child copies the parent entry's expiration tokens and time-based expiration settings. The child isn't expired by manual removal or updating of the parent entry.
- Use PostEvictionCallbacks to set the callbacks that will be fired after the cache entry is evicted from the cache.
- For most apps, IMemoryCache is enabled. For example, calling AddMvc, AddControllersWithViews, AddRazorPages, AddMvcCore().AddRazorViewEngine, and many other Add{Service} methods in ConfigureServices enables IMemoryCache. For apps that don't call one of the preceding Add{Service} methods, it may be necessary to call AddMemoryCache in ConfigureServices.
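The CancellationTokenSource(TimeSpan) pattern from the first note above can be sketched as follows (the CacheKeys.TimedEntry key name is illustrative):

```csharp
// The token source cancels itself 10 seconds after construction. When it
// fires, the entry is removed immediately and eviction callbacks run,
// without waiting for other cache activity to trigger an expiration scan.
var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10));

var cacheEntryOptions = new MemoryCacheEntryOptions()
    .AddExpirationToken(new CancellationChangeToken(cts.Token))
    .RegisterPostEvictionCallback((key, value, reason, state) =>
    {
        Console.WriteLine($"Entry {key} evicted. Reason: {reason}.");
    });

_cache.Set(CacheKeys.TimedEntry, DateTime.Now, cacheEntryOptions);
```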
Background cache update
Use a background service such as IHostedService to update the cache. The background service can recompute the entries and then assign them to the cache only when they’re ready.
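A hedged sketch of that approach, using BackgroundService (a base class implementing IHostedService); the key name, refresh interval, and expiration are illustrative:

```csharp
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Hosting;

public class CacheRefreshService : BackgroundService
{
    private readonly IMemoryCache _cache;

    public CacheRefreshService(IMemoryCache cache) => _cache = cache;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Recompute the value first...
            DateTime freshValue = DateTime.Now;

            // ...then assign it, so readers never see a half-built entry.
            _cache.Set("cachedTime", freshValue, TimeSpan.FromMinutes(5));

            await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
        }
    }
}
```

The service would be registered with `services.AddHostedService<CacheRefreshService>();` in ConfigureServices.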
Additional resources
nsICachingChannel
netwerk/base/public/nsICachingChannel.idl

Scriptable. Inherits from: nsICacheInfoChannel

Last changed in Gecko 2.0 (Firefox 4 / Thunderbird 3.3 / SeaMonkey 2.1)

This interface provides:
- Support for 'stream as file' semantics (for JAR and plugins).
- Support for 'pinning' cached data in the cache (for printing and save-as).
- Support for uniquely identifying cached data in cases when the URL is insufficient. For example a HTTP form submission.
A channel may optionally implement this interface to allow clients to affect its behavior with respect to how it uses the cache service.
Method overview
Attributes
| Attribute | Type | Description |
| --- | --- | --- |
| cacheAsFile | boolean | Specifies whether or not the data should be cached to a file. This may fail if the disk cache is not present. The value of this attribute is usually only settable during the processing of a channel's OnStartRequest. The default value of this attribute depends on the particular implementation of nsICachingChannel. |
| cacheFile | nsIFile | Gets the file where the cached data can be found. This is valid for as long as a reference to the cache token is held. This may return an error if cacheAsFile is false. Read only. |
| cacheForOfflineUse | boolean | Specifies whether or not the data should be placed in the offline cache, in addition to normal memory/disk caching. This may fail if the offline cache is not present. The value of this attribute should be set before opening the channel. |
| cacheKey | nsISupports | Uniquely identifies the data in the cache for this channel. Holding a reference to this key does not prevent the cached data from being removed. A cache key retrieved from a particular instance of nsICachingChannel could be set on another instance of nsICachingChannel, provided the underlying implementations are compatible and the new channel instance was created with the same URI. The implementation of nsICachingChannel would be expected to use the cache entry identified by the cache key. Depending on the value of nsIRequest.loadFlags(), the cache entry may be validated, overwritten, or simply read. The cache key may be null, indicating that the URI of the channel is sufficient to locate the same cache entry. Setting a null cache key is likewise valid. |
| cacheToken | nsISupports | Uniquely identifies the data in the cache. Holding a reference to this token prevents the cached data from being removed. A cache token retrieved from a particular instance of nsICachingChannel could be set on another instance of nsICachingChannel, provided the underlying implementations are compatible. The implementation of nsICachingChannel would be expected to only read from the cache entry identified by the cache token and not try to validate it. The cache token can be QueryInterface'd to nsICacheEntryInfo if more detail about the cache entry is needed, for example the expiration time. |
| offlineCacheClientID | ACString | The session into which to cache offline data. If not specified, data will be placed in 'HTTP-offline'. |
| offlineCacheToken | nsISupports | The same as cacheToken, but accesses the offline app cache token if there is any. |
Constants
| Constant | Value | Description |
| --- | --- | --- |
| LOAD_NO_NETWORK_IO | 1 << 26 | This load flag inhibits fetching from the net. An error of NS_ERROR_DOCUMENT_NOT_CACHED will be sent to the listener's onStopRequest if network IO is necessary to complete the request. This flag can be used to find out whether fetching this URL would cause validation of the cache entry via the network. Combining this flag with LOAD_BYPASS_LOCAL_CACHE will cause all loads to fail. This flag differs from LOAD_ONLY_FROM_CACHE in that this flag fails the load if validation is required, while LOAD_ONLY_FROM_CACHE skips validation where possible. |
| LOAD_CHECK_OFFLINE_CACHE | 1 << 27 | This load flag causes the offline cache to be checked when fetching a request. It will be set automatically if the browser is offline. This flag will not be transferred through a redirect. |
| LOAD_BYPASS_LOCAL_CACHE | 1 << 28 | This load flag causes the local cache to be skipped when fetching a request. Unlike LOAD_BYPASS_CACHE, it does not force an end-to-end load (that is, it does not affect proxy caches). |
| LOAD_BYPASS_LOCAL_CACHE_IF_BUSY | 1 << 29 | This load flag causes the local cache to be skipped if the request would otherwise block waiting to access the cache. |
| LOAD_ONLY_FROM_CACHE | 1 << 30 | This load flag inhibits fetching from the net if the data in the cache has been evicted. An error of NS_ERROR_DOCUMENT_NOT_CACHED will be sent to the listener's onStopRequest in this case. This flag is set automatically when the application is offline. |
| LOAD_ONLY_IF_MODIFIED | 1 << 31 | This load flag controls what happens when a document would be loaded from the cache to satisfy a call to AsyncOpen. If this attribute is set to true, then the document will not be loaded from the cache. If this flag has been set, and the request can be satisfied via the cache, then the OnDataAvailable events will be skipped. The listener will only see OnStartRequest followed by OnStopRequest. A stream listener can check isFromCache() to determine if the AsyncOpen will actually result in data being streamed. |
Methods
isFromCache()
Obsolete since Gecko 2.0 (Firefox 4 / Thunderbird 3.3 / SeaMonkey 2.1)

This method finds out whether or not this channel's data is being loaded from the cache.
Parameters
None.
Return value
Returns true if this channel's data is being loaded from the cache; otherwise returns false. This value is undefined before the channel fires its OnStartRequest notification and after the channel fires its OnStopRequest notification.