
How would you implement an LRU cache in Java?

Eight good answers to this question are collected below.

Please don't say EHCache or OSCache, etc. For the purposes of this question, assume I want to implement my own using just the SDK (learning by doing). Given that the cache will be used in a multithreaded environment, which data structures would you use? I've already implemented one using LinkedHashMap and Collections#synchronizedMap, but I'm curious whether any of the new concurrent collections would be better candidates.

Update: I was just reading Yegge's latest when I found this nugget:

If you need constant-time access and want to maintain the insertion order, you can't do better than a LinkedHashMap, a truly wonderful data structure. The only way it could possibly be more wonderful is if there were a concurrent version. But alas.

I was thinking almost exactly the same thing before I went with the LinkedHashMap + Collections#synchronizedMap implementation mentioned above. Nice to know I hadn't just overlooked something.

Based on the answers so far, it sounds like my best bet for a highly concurrent LRU would be to extend ConcurrentHashMap using some of the same logic that LinkedHashMap uses.



1> Hank Gay..:

I like lots of these suggestions, but for now I think I'll stick with LinkedHashMap + Collections.synchronizedMap. If I do revisit this in the future, I'll probably work on extending ConcurrentHashMap in the same way LinkedHashMap extends HashMap.

Update:

By request, here's the gist of my current implementation.

private class LruCache<A, B> extends LinkedHashMap<A, B> {
    private final int maxEntries;

    public LruCache(final int maxEntries) {
        super(maxEntries + 1, 1.0f, true);
        this.maxEntries = maxEntries;
    }

    /**
     * Returns true if this LruCache has more entries than the maximum specified when it was
     * created.
     *
     * This method does not modify the underlying Map; it relies on the implementation of
     * LinkedHashMap to do that, but that behavior is documented in the JavaDoc for
     * LinkedHashMap.
     *
     * @param eldest the Entry in question; this implementation doesn't care what it is, since the
     *               implementation is only dependent on the size of the cache
     * @return true if the eldest entry should be removed
     * @see java.util.LinkedHashMap#removeEldestEntry(Map.Entry)
     */
    @Override
    protected boolean removeEldestEntry(final Map.Entry<A, B> eldest) {
        return super.size() > maxEntries;
    }
}

Map<String, String> example = Collections.synchronizedMap(new LruCache<String, String>(CACHE_SIZE));


I'd prefer to use composition rather than inheritance here, though; that's something I learned from Effective Java. (See the sketch after these comments.)
@KapilD It's been a while, but I'm almost positive the JavaDocs for `LinkedHashMap` explicitly endorse this method of creating an LRU implementation.
@HankGay Java's LinkedHashMap (with third parameter = true) is not an LRU cache. This is because re-putting an entry does not affect the order of entries (a real LRU cache would place the last-inserted entry at the back of the iteration order regardless of whether that entry previously existed in the cache).
@Pacerier "re-putting an entry does not affect the order of entries" is incorrect. If you look at LinkedHashMap's implementation, the "put" method is inherited from HashMap, and HashMap's Javadoc says: "If the map previously contained a mapping for the key, the old value is replaced." If you look at its source code, when replacing the old value it calls the recordAccess method, and in LinkedHashMap's recordAccess method it looks like this: if (lm.accessOrder) { lm.modCount++; remove(); addBefore(lm.header); }
@Pacerier I don't see that behavior at all. With an accessOrder-enabled map, all operations make an entry the most recently used (freshest): initial insertion, value update, and value retrieval. Am I missing something?
@nybon Since I know the maximum size of the cache, why would I want any load factor *other* than `1.0f`? The point of doing it this way is to perform exactly one allocation (the initial one), and to avoid the Map rehashing its existing contents when it has to grow to hold new elements.
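
The composition-based variant mentioned in the comments might look something like this (a rough sketch; the class name and exposed methods are illustrative, not part of the original gist):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Wraps the LinkedHashMap instead of extending it, so only the cache
// operations we intend to support are exposed.
public final class LruCacheByComposition<K, V> {
    private final Map<K, V> delegate;

    public LruCacheByComposition(final int maxEntries) {
        // accessOrder = true gives LRU iteration order; the synchronized
        // wrapper makes individual calls thread-safe.
        delegate = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(maxEntries + 1, 1.0f, true) {
                @Override
                protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                    return size() > maxEntries;
                }
            });
    }

    public V get(K key)             { return delegate.get(key); }
    public void put(K key, V value) { delegate.put(key, value); }
    public int size()               { return delegate.size(); }
}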

2> Hank Gay..:

If I were doing this again from scratch today, I'd use Guava's CacheBuilder.


And the Java 8 rewrite... [Caffeine](https://github.com/ben-manes/caffeine/wiki)
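
For reference, a minimal sketch of the CacheBuilder approach, assuming Guava is on the classpath (the size bound and types are illustrative); Caffeine's builder API is nearly identical (Caffeine.newBuilder()):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class GuavaLruExample {
    public static void main(String[] args) {
        // Guava evicts entries in approximately least-recently-used order
        // once maximumSize is exceeded; the cache is safe for concurrent use.
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(100)
                .build();

        cache.put("key", "value");
        String value = cache.getIfPresent("key"); // null if absent or evicted
        System.out.println(value);
    }
}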

3> RickHigh..:

This is round two.

The first round was what I came up with; then I reread the comments with the domain a bit more ingrained in my head.

So here is the simplest version with a unit test that shows it works based on some other versions.

First the non-concurrent version:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSimpleCache<K, V> implements LruCache<K, V> {

    Map<K, V> map = new LinkedHashMap<> (  );


    public LruSimpleCache (final int limit) {
           map = new LinkedHashMap<K, V> (16, 0.75f, true) {
               @Override
               protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
                   return super.size() > limit;
               }
           };
    }
    @Override
    public void put ( K key, V value ) {
        map.put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map.get(key);
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        V value =  map.get ( key );
        if (value!=null) {
            map.remove ( key );
            map.put(key, value);
        }
        return value;
    }

    @Override
    public void remove ( K key ) {
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }


}

The true flag will track the access order of gets and puts. See the JavaDocs. removeEldestEntry without the true flag to the constructor would just implement a FIFO cache (see the notes below on FIFO and removeEldestEntry).
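
A quick illustration of what that flag changes (the class and variable names here are just for the example):

import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: get() moves an entry to the end of the
        // iteration order, so the eldest entry is the least recently used.
        LinkedHashMap<Integer, String> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put(1, "a");
        map.put(2, "b");
        map.get(1);                       // touch key 1
        System.out.println(map.keySet()); // prints [2, 1]
    }
}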

Here is the test that proves it works as an LRU cache:

public class LruSimpleTest {
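    // Note: die(...) is the author's assertion helper (presumably from the
    // Boon library; it throws on failure), and @Test is JUnit's annotation.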

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruSimpleCache<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        if ( !ok ) die ();

    }
}

Now for the concurrent version...

package org.boon.cache;

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LruSimpleConcurrentCache<K, V> implements LruCache<K, V> {

    final CacheMap<K, V>[] cacheRegions;


    private static class CacheMap<K, V> extends LinkedHashMap<K, V> {
        private final ReadWriteLock readWriteLock;
        private final int limit;

        CacheMap ( final int limit, boolean fair ) {
            super ( 16, 0.75f, true );
            this.limit = limit;
            readWriteLock = new ReentrantReadWriteLock ( fair );

        }

        @Override
        protected boolean removeEldestEntry ( final Map.Entry<K, V> eldest ) {
            return super.size () > limit;
        }


        @Override
        public V put ( K key, V value ) {
            readWriteLock.writeLock ().lock ();

            V old;
            try {

                old = super.put ( key, value );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return old;

        }


        @Override
        public V get ( Object key ) {
            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.get ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;
        }

        @Override
        public V remove ( Object key ) {

            readWriteLock.writeLock ().lock ();
            V value;

            try {

                value = super.remove ( key );
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public V getSilent ( K key ) {
            readWriteLock.writeLock ().lock ();

            V value;

            try {

                value = this.get ( key );
                if ( value != null ) {
                    this.remove ( key );
                    this.put ( key, value );
                }
            } finally {
                readWriteLock.writeLock ().unlock ();
            }
            return value;

        }

        public int size () {
            readWriteLock.readLock ().lock ();
            int size = -1;
            try {
                size = super.size ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return size;
        }

        public String toString () {
            readWriteLock.readLock ().lock ();
            String str;
            try {
                str = super.toString ();
            } finally {
                readWriteLock.readLock ().unlock ();
            }
            return str;
        }


    }

    @SuppressWarnings("unchecked")
    public LruSimpleConcurrentCache ( final int limit, boolean fair ) {
        int cores = Runtime.getRuntime ().availableProcessors ();
        int stripeSize = cores < 2 ? 4 : cores * 2;
        cacheRegions = new CacheMap[ stripeSize ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    @SuppressWarnings("unchecked")
    public LruSimpleConcurrentCache ( final int concurrency, final int limit, boolean fair ) {

        cacheRegions = new CacheMap[ concurrency ];
        for ( int index = 0; index < cacheRegions.length; index++ ) {
            cacheRegions[ index ] = new CacheMap<> ( limit / cacheRegions.length, fair );
        }
    }

    private int stripeIndex ( K key ) {
        int hashCode = key.hashCode () * 31;
        /* Mask off the sign bit so a negative hashCode cannot produce a negative index. */
        return ( hashCode & 0x7fffffff ) % ( cacheRegions.length );
    }

    private CacheMap<K, V> map ( K key ) {
        return cacheRegions[ stripeIndex ( key ) ];
    }

    @Override
    public void put ( K key, V value ) {

        map ( key ).put ( key, value );
    }

    @Override
    public V get ( K key ) {
        return map ( key ).get ( key );
    }

    //For testing only
    @Override
    public V getSilent ( K key ) {
        return map ( key ).getSilent ( key );

    }

    @Override
    public void remove ( K key ) {
        map ( key ).remove ( key );
    }

    @Override
    public int size () {
        int size = 0;
        for ( CacheMap<K, V> cache : cacheRegions ) {
            size += cache.size ();
        }
        return size;
    }

    public String toString () {

        StringBuilder builder = new StringBuilder ();
        for ( CacheMap<K, V> cache : cacheRegions ) {
            builder.append ( cache.toString () ).append ( '\n' );
        }

        return builder.toString ();
    }


}

You can see why I cover the non-concurrent version first. The above attempts to create some stripes to reduce lock contention: it hashes the key and then uses that hash to look up the actual cache. This makes the limit size more of a suggestion/rough guess, within a fair margin of error depending on how well spread your keys' hash algorithm is.

Here is the test to show that the concurrent version probably works. :) (Testing it under fire would be the real way.)

public class SimpleConcurrentLRUCache {
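    // Note: die(...) and puts(...) are the author's test helpers (presumably
    // from Boon): an assertion that throws on failure, and console printing.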


    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 1, 4, false );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );

        puts (cache);
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();


        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();

        cache.put ( 8, 8 );
        cache.put ( 9, 9 );

        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();


        puts (cache);


        if ( !ok ) die ();

    }


    @Test
    public void test2 () {
        LruCache<Integer, Integer> cache = new LruSimpleConcurrentCache<> ( 400, false );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        for (int index =0 ; index < 5_000; index++) {
            cache.get(0);
            cache.get ( 1 );
            cache.put ( 2, index  );
            cache.put ( 3, index );
            cache.put(index, index);
        }

        boolean ok = cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 1 ) == 1 || die ();
        ok |= cache.getSilent ( 2 ) != null || die ();
        ok |= cache.getSilent ( 3 ) != null || die ();

        ok |= cache.size () < 600 || die();
        if ( !ok ) die ();



    }

}

This is the last post. The first post I deleted, as it was an LFU, not an LRU, cache.

I thought I would give this another go. I was trying to come up with the simplest version of an LRU cache using the standard JDK without too much implementation.

Here is what I came up with. My first attempt was a bit of a disaster, as I implemented an LFU instead of an LRU, and then I added FIFO and LRU support to it... and then I realized it was becoming a monster. Then I started talking to my buddy John, who was barely interested, and then I described at length how I implemented an LFU, LRU, and FIFO and how you could switch between them with a simple enum arg, and then I realized that all I really wanted was a simple LRU. So ignore the earlier post from me, and let me know if you want to see an LRU/LFU/FIFO cache that is switchable via an enum... no? OK... here we go.

The simplest possible LRU using just the JDK. I implemented both a concurrent version and a non-concurrent version.

I created a common interface (it is minimalist, so it is likely missing a few features you would like, but it works for my use cases; if you would like to see feature XYZ, let me know... I live to write code.).

public interface LruCache<KEY, VALUE> {
    void put ( KEY key, VALUE value );

    VALUE get ( KEY key );

    VALUE getSilent ( KEY key );

    void remove ( KEY key );

    int size ();
}

You may wonder what getSilent is. I use it for testing. getSilent does not change the LRU score of an item.

First the non-concurrent one....

import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class LruCacheNormal<KEY, VALUE> implements LruCache<KEY, VALUE> {

    Map<KEY, VALUE> map = new HashMap<> ();
    Deque<KEY> queue = new LinkedList<> ();
    final int limit;


    public LruCacheNormal ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );

        /*If there was already an object under this key,
         then remove it before adding to queue
         Frequently used keys will be at the top so the search could be fast.
         */
        if ( oldValue != null ) {
            queue.removeFirstOccurrence ( key );
        }
        queue.addFirst ( key );

        if ( map.size () > limit ) {
            final KEY removedKey = queue.removeLast ();
            map.remove ( removedKey );
        }

    }


    public VALUE get ( KEY key ) {

        /* Frequently used keys will be at the top so the search could be fast.*/
        queue.removeFirstOccurrence ( key );
        queue.addFirst ( key );
        return map.get ( key );
    }


    public VALUE getSilent ( KEY key ) {

        return map.get ( key );
    }

    public void remove ( KEY key ) {

        /* Frequently used keys will be at the top so the search could be fast.*/
        queue.removeFirstOccurrence ( key );
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }
}

The queue.removeFirstOccurrence call is a potentially expensive operation if you have a large cache. One could take LinkedList as a starting point and add a reverse-lookup hash map from element to node to make remove operations A LOT FASTER and more consistent. I started to, but then realized I don't need it. But... maybe...

When put is called, the key gets added to the queue. When get is called, the key gets removed and re-added to the top of the queue.

If your cache is small and building an item is expensive, then this should be a good cache. If your cache is really large, then the linear search could be a bottleneck, especially if you don't have hot areas of the cache. The more intense the hot spots, the faster the linear search, as hot items are always at the top. Anyway... what is needed for this to go faster is to write another LinkedList with a remove operation that does a reverse element-to-node lookup; then removing would be about as fast as removing a key from a hash map. (See the sketch below.)

If you have a cache under 1,000 items, this should work out fine.
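
For reference, here is a rough sketch of that reverse element-to-node idea (all names here are illustrative, not part of the answer's code): a HashMap from key to the node of a hand-rolled doubly linked list makes the touch and remove operations O(1) instead of a linear scan.

import java.util.HashMap;
import java.util.Map;

public class LruCacheLinked<K, V> {
    private static final class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final Map<K, Node<K, V>> index = new HashMap<>();
    private final int limit;
    private Node<K, V> head, tail; // head = most recent, tail = least recent

    public LruCacheLinked(int limit) { this.limit = limit; }

    public V get(K key) {
        Node<K, V> node = index.get(key);
        if (node == null) return null;
        moveToHead(node); // O(1) touch instead of a linear queue scan
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = index.get(key);
        if (node != null) { node.value = value; moveToHead(node); return; }
        node = new Node<>(key, value);
        index.put(key, node);
        addFirst(node);
        if (index.size() > limit) { // evict the least recently used entry
            index.remove(tail.key);
            unlink(tail);
        }
    }

    private void moveToHead(Node<K, V> node) { unlink(node); addFirst(node); }

    private void addFirst(Node<K, V> node) {
        node.prev = null; node.next = head;
        if (head != null) head.prev = node;
        head = node;
        if (tail == null) tail = node;
    }

    private void unlink(Node<K, V> node) {
        if (node.prev != null) node.prev.next = node.next; else head = node.next;
        if (node.next != null) node.next.prev = node.prev; else tail = node.prev;
    }
}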

Here is a simple test to show it in action.

public class LruCacheTest {

    @Test
    public void test () {
        LruCache<Integer, Integer> cache = new LruCacheNormal<> ( 4 );


        cache.put ( 0, 0 );
        cache.put ( 1, 1 );

        cache.put ( 2, 2 );
        cache.put ( 3, 3 );


        boolean ok = cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == 0 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();


        cache.put ( 4, 4 );
        cache.put ( 5, 5 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 0 ) == null || die ();
        ok |= cache.getSilent ( 1 ) == null || die ();
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == 4 || die ();
        ok |= cache.getSilent ( 5 ) == 5 || die ();

        if ( !ok ) die ();

    }
}

This last LRU cache is single-threaded; please don't wrap it in a synchronized anything....

Here is a stab at a concurrent version.

import java.util.Deque;
import java.util.LinkedList;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrentLruCache<KEY, VALUE> implements LruCache<KEY, VALUE> {

    private final ReentrantLock lock = new ReentrantLock ();


    private final Map<KEY, VALUE> map = new ConcurrentHashMap<> ();
    private final Deque<KEY> queue = new LinkedList<> ();
    private final int limit;


    public ConcurrentLruCache ( int limit ) {
        this.limit = limit;
    }

    @Override
    public void put ( KEY key, VALUE value ) {
        VALUE oldValue = map.put ( key, value );
        if ( oldValue != null ) {
            removeThenAddKey ( key );
        } else {
            addKey ( key );
        }
        if (map.size () > limit) {
            map.remove ( removeLast() );
        }
    }


    @Override
    public VALUE get ( KEY key ) {
        removeThenAddKey ( key );
        return map.get ( key );
    }


    private void addKey(KEY key) {
        lock.lock ();
        try {
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }


    }

    private KEY removeLast( ) {
        lock.lock ();
        try {
            final KEY removedKey = queue.removeLast ();
            return removedKey;
        } finally {
            lock.unlock ();
        }
    }

    private void removeThenAddKey(KEY key) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
            queue.addFirst ( key );
        } finally {
            lock.unlock ();
        }

    }

    private void removeFirstOccurrence(KEY key) {
        lock.lock ();
        try {
            queue.removeFirstOccurrence ( key );
        } finally {
            lock.unlock ();
        }

    }


    @Override
    public VALUE getSilent ( KEY key ) {
        return map.get ( key );
    }

    @Override
    public void remove ( KEY key ) {
        removeFirstOccurrence ( key );
        map.remove ( key );
    }

    @Override
    public int size () {
        return map.size ();
    }

    public String toString () {
        return map.toString ();
    }
}

The main differences are the use of ConcurrentHashMap instead of HashMap, and the use of a Lock (I could have gotten away with synchronized, but...).

I haven't tested it under fire, but it seems like a simple LRU cache that might work out in 80% of the use cases that need a simple LRU map.

I welcome feedback, except for "why don't you use library a, b, or c". The reason I don't always use a library is that I don't always want every war file to be 80MB, and I write libraries, so I tend to make the libs pluggable with a good-enough solution in place, and someone can plug in another cache provider if they like. :) I never know when someone might need Guava or EhCache or something else I don't want to include, but if I make caching pluggable, I will not exclude them either.

Reduction of dependencies has its own reward. I'd love to get some feedback on how to make this even simpler, or faster, or both.

Also if anyone knows of a ready to go....

Ok.. I know what you are thinking... Why doesn't he just use removeEldestEntry from LinkedHashMap? Well, I should, but... that would be a FIFO, not an LRU, and we were trying to implement an LRU.

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {

        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };

This test fails for the above code...

        cache.get ( 2 );
        cache.get ( 3 );
        cache.put ( 6, 6 );
        cache.put ( 7, 7 );
        ok |= cache.size () == 4 || die ( "size" + cache.size () );
        ok |= cache.getSilent ( 2 ) == 2 || die ();
        ok |= cache.getSilent ( 3 ) == 3 || die ();
        ok |= cache.getSilent ( 4 ) == null || die ();
        ok |= cache.getSilent ( 5 ) == null || die ();

So here is a quick and dirty FIFO cache using removeEldestEntry.

import java.util.*;

public class FifoCache<KEY, VALUE> implements LruCache<KEY, VALUE> {

    final int limit;

    Map<KEY, VALUE> map = new LinkedHashMap<KEY, VALUE> () {

        @Override
        protected boolean removeEldestEntry ( Map.Entry<KEY, VALUE> eldest ) {
            return this.size () > limit;
        }
    };


    public FifoCache ( int limit ) {
        this.limit = limit;
    }

    public void put ( KEY key, VALUE value ) {
         map.put ( key, value );


    }


    public VALUE get ( KEY key ) {

        return map.get ( key );
    }


    public VALUE getSilent ( KEY key ) {

        return map.get ( key );
    }

    public void remove ( KEY key ) {
        map.remove ( key );
    }

    public int size () {
        return map.size ();
    }

    public String toString() {
        return map.toString ();
    }
}

FIFOs are fast. No searching around. You could put a FIFO in front of an LRU, and that would handle most hot entries quite nicely. A better LRU is going to need that reverse element-to-node feature.

Anyway... now that I wrote some code, let me go through the other answers and see what I missed... the first time I scanned them.



4> 小智..:

LinkedHashMap is O(1), but it requires synchronization. No need to reinvent the wheel there.

Two options for increasing concurrency:

1. Create multiple LinkedHashMaps and hash into them. Example: LinkedHashMap[4], indices 0, 1, 2, 3. On the key, do key % 4 (or a binary AND with 3: key & 3) to pick which map to do the put/get/remove on. (See the sketch after these notes.)

2. You could do an "almost" LRU by extending ConcurrentHashMap and using a linked-hash-map-like structure inside each of its internal regions. Locking would be more fine-grained than with a synchronized LinkedHashMap. On a put or putIfAbsent, only a lock on the head and tail of the list is needed (per region). On remove or get, the whole region needs to be locked. I'm curious whether an atomic linked list of some kind might help here; probably for the head of the list. Maybe for more.

The structure would not keep a total order, only an order per region. As long as the number of entries is much larger than the number of regions, this is good enough for most caches. Each region would have to keep its own entry count, and that count, rather than a global count, would be used for the eviction trigger. The default number of regions in a ConcurrentHashMap is 16, which is plenty for most servers today.

    Option 1 is easier to write and is faster under moderate concurrency.

    Option 2 is harder to write, but scales better under very high concurrency. It would be slower for normal access (just as ConcurrentHashMap is slower than HashMap where there is no concurrency).
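
A rough sketch of option 1 (the class name and the fixed segment count of 4 are illustrative):

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Four independently synchronized LRU segments; a key's hash picks the
// segment, so threads hitting different segments never contend.
public class StripedLruCache<K, V> {
    private final Map<K, V>[] stripes;

    @SuppressWarnings("unchecked")
    public StripedLruCache(final int limitPerStripe) {
        stripes = new Map[4];
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = Collections.synchronizedMap(
                    new LinkedHashMap<K, V>(16, 0.75f, true) {
                        @Override
                        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                            return size() > limitPerStripe;
                        }
                    });
        }
    }

    private Map<K, V> stripe(K key) {
        // Mask the sign bit so a negative hashCode cannot yield a negative index.
        return stripes[(key.hashCode() & 0x7fffffff) % stripes.length];
    }

    public V get(K key)             { return stripe(key).get(key); }
    public void put(K key, V value) { stripe(key).put(key, value); }
    public void remove(K key)       { stripe(key).remove(key); }
}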



5> Ron..:

There are two open-source implementations.

Apache Solr has a ConcurrentLRUCache: https://lucene.apache.org/solr/3_6_1/org/apache/solr/util/ConcurrentLRUCache.html

There is an open-source project for a ConcurrentLinkedHashMap: http://code.google.com/p/concurrentlinkedhashmap/


A simplified version has been integrated, but the tests are not finished, so it is not public yet. I ran into a lot of problems doing a deeper integration, but I hope to finish it, since it has some nice algorithmic properties. The ability to listen for eviction (capacity, expiration, GC) was added, based on the CLHM's approach (a listener queue). I would also like to contribute the idea of "weighted values", since that is useful when caching collections. Unfortunately, due to other commitments, I have been too swamped to devote the time Guava deserves (and that I promised Kevin/Charles).
Update: the integration was completed and is public in Guava r08. It is configured via #maximumSize().
Solr's solution is not actually LRU, but `ConcurrentLinkedHashMap` is interesting. It claims to have made it into MapMaker from Guava, but I didn't spot it in the docs. Any idea what's happening with that effort?

6> Steve McLeod..:

I would consider using java.util.concurrent.PriorityBlockingQueue, with priority determined by a "numberOfUses" counter in each element. I would be very, very careful to get all my synchronization correct, since the "numberOfUses" counter implies that the element cannot be immutable.

The element object would be a wrapper for the object in the cache:

class CacheElement {
    private final Object obj;
    private int numberOfUses = 0;

    CacheElement(Object obj) {
        this.obj = obj;
    }

    ... etc.
}


Note that if you try to do the PriorityBlockingQueue version mentioned by steve mcleod, you should make the element immutable, because modifying the element while it is in the queue will have no effect; you would need to remove the element and re-add it in order to re-prioritize it.

7> 小智..:

Hope this helps.

import java.util.*;
public class Lru {

    public static <K, V> Map<K, V> lruCache(final int maxSize) {
        return new LinkedHashMap<K, V>(maxSize * 4 / 3, 0.75f, true) {

            private static final long serialVersionUID = -3588047435434569014L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> lru = Lru.lruCache(2);
        lru.put("1", "1");
        lru.put("2", "2");
        lru.put("3", "3");
        System.out.println(lru); // prints {2=2, 3=3}; "1" was evicted
    }
}



8> 小智..:

An LRU cache can be implemented with a ConcurrentLinkedQueue and a ConcurrentHashMap, which also works in multithreaded scenarios. The head of the queue is the element that has been in the queue the longest; the tail is the element that has been in the queue for the shortest time. When an element already exists in the map, we remove it from the queue and insert it at the tail.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LRUCache<K, V> {
  private ConcurrentHashMap<K, V> map;
  private ConcurrentLinkedQueue<K> queue;
  private final int size; 

  public LRUCache(int size) {
    this.size = size;
    map = new ConcurrentHashMap<K, V>(size);
    queue = new ConcurrentLinkedQueue<K>();
  }

  public V get(K key) {
    //Recently accessed, hence move it to the tail
    queue.remove(key);
    queue.add(key);
    return map.get(key);
  }

  public void put(K key, V value) {
    //ConcurrentHashMap doesn't allow null key or values
    if(key == null || value == null) throw new NullPointerException();
    if(map.containsKey(key)) {
      queue.remove(key);
    }
    if(queue.size() >= size) {
      K lruKey = queue.poll();
      if(lruKey != null) {
        map.remove(lruKey);
      }
    }
    queue.add(key);
    map.put(key,value);
  }

}
