I. Official Introduction
- A hash table based implementation of the Map interface. This implementation provides all of the optional map operations, and permits null values and the null key. (The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.) This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
- This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets. Iteration over the collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, it is very important not to set the initial capacity too high (or the load factor too low) if iteration performance matters.
- A HashMap instance has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, its internal data structures are rebuilt) so that it has approximately twice the number of buckets.
- As a general rule, the default load factor (0.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting the initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
- If many mappings are to be stored in a HashMap instance, creating it with a sufficiently large capacity will allow the mappings to be stored more efficiently than letting it perform automatic rehashing as needed to grow the table. Note that using many keys with the same hashCode() is a sure way to slow down the performance of any hash table. To ameliorate the impact, when keys are Comparable, this class may use comparison order among keys to help break ties.
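A minimal sketch of same-hashCode keys (the demo class name is hypothetical; "Aa" and "BB" are a well-known string pair with identical hash codes):

import java.util.HashMap;
import java.util.Map;

public class HashCodeCollisionDemo
{
    public static void main(String[] args)
    {
        // "Aa" and "BB" share the same hashCode (2112),
        // so they always land in the same bucket.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);
        // Both entries coexist in one bucket; equals() disambiguates them.
        System.out.println(map.get("Aa")); // 1
        System.out.println(map.get("BB")); // 2
    }
}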
- Note that this implementation is not synchronized. If multiple threads access a hash map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.) This is typically accomplished by synchronizing on some object that naturally encapsulates the map. If no such object exists, the map should be wrapped using the Collections.synchronizedMap method. This is best done at creation time, to prevent accidental unsynchronized access to the map:
Map m = Collections.synchronizedMap(new HashMap(...));
- The iterators returned by all of this class's collection view methods are fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.
- Note that the fail-fast behavior of an iterator cannot be guaranteed, because it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depends on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs. A minimal demonstration follows.
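A small sketch of the fail-fast behavior (the demo class name is hypothetical; modifying the map mid-iteration by any means other than the iterator's own remove() triggers the exception on a best-effort basis):

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo
{
    public static void main(String[] args)
    {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext())
        {
            String key = it.next();
            if ("a".equals(key))
            {
                // map.remove(key); // structural modification -> ConcurrentModificationException on the next it.next()
                it.remove();        // removing through the iterator itself is safe
            }
        }
        System.out.println(map);    // {b=2}
    }
}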
- This class is a member of the Java Collections Framework.
II. Implementation Notes
- This map usually acts as a binned (bucketed) hash table, but when bins get too large, they are transformed into bins of TreeNodes, each structured similarly to those in java.util.TreeMap. Most methods try to use normal bins, but relay to TreeNode methods when applicable (simply by checking instanceof on a node). Bins of TreeNodes may be traversed and used like any others, but additionally support faster lookup when overpopulated. However, since the vast majority of bins in normal use are not overpopulated, checking for the existence of tree bins may be delayed in the course of table methods.
- Tree bins (bins whose elements are all TreeNodes) are ordered primarily by hashCode, but in the case of ties, if two elements are of the same "class C implements Comparable<C>", their compareTo method is used for ordering.
III. Key Parts in Detail
1. Prerequisites
Before reading the source, you should be familiar with the following concepts:
- capacity: the number of buckets, optionally specified when the HashMap is constructed. With the no-arg constructor the table starts out unallocated (effectively capacity 0) and is given the default capacity of 16 on first use.
- threshold: threshold = capacity * loadFactor. When the HashMap's size reaches the threshold, it resizes (expands).
- In the capacity-taking constructors, threshold is assigned via tableSizeFor(int cap), which guarantees the result is a power of two greater than or equal to cap; for example, 6 returns 8 and 12 returns 16.
- HashMap is backed by an array of Node objects; Node is HashMap's own linked-list node class, with the fields hash, key, value, and next.
- Each array slot (bucket) can be in one of three states: a single node, a linked list, or a red-black tree. A bucket becomes a linked list as soon as it holds more than one node; a list is converted to a red-black tree when an element is added to a bucket that already holds at least 8 nodes (TREEIFY_THRESHOLD, and only once the table capacity reaches 64, otherwise the table is resized instead); a tree is converted back to a list when it shrinks to 6 nodes or fewer (UNTREEIFY_THRESHOLD).
- Every resize performs a rehash to redistribute the nodes evenly. (This is also why, when working with large data sets, you should initialize the HashMap with a generous capacity to cut down the number of rehash operations; see the sketch below.)
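A minimal sketch of pre-sizing to avoid rehashing (the class name and numbers are illustrative): to hold expectedSize entries without any resize, the initial capacity should exceed expectedSize / loadFactor.

import java.util.HashMap;
import java.util.Map;

public class PreSizingDemo
{
    public static void main(String[] args)
    {
        int expectedSize = 10_000;

        // Default construction: starts at capacity 16 and rehashes
        // repeatedly (16 -> 32 -> ... -> 16384) as entries are added.
        Map<Integer, Integer> grown = new HashMap<>();

        // Pre-sized: the capacity is chosen so that expectedSize <= capacity * 0.75,
        // so no rehash ever happens while loading the data.
        Map<Integer, Integer> preSized = new HashMap<>((int)(expectedSize / 0.75f) + 1);

        for (int i = 0; i < expectedSize; i++)
        {
            grown.put(i, i);
            preSized.put(i, i);
        }
    }
}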
2. Analysis of Key Methods
The method that guarantees a power-of-two result:
/**
 * Guarantees that the returned value is a power of two: for any cap with
 * 2^N < cap <= 2^(N+1), the result is 2^(N+1), i.e. the smallest power of
 * two greater than or equal to cap.
 * int n = cap - 1; ensures that when cap is already a power of two, the
 * original value is returned rather than double it: 8 returns 8, not 16.
 */
static final int tableSizeFor(int cap)
{
int n = cap - 1; // ensures that when cap is already a power of two, the original value is returned: 8 returns 8, not 16
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
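A quick check of the rounding behavior (a standalone copy of the method for illustration, since the real one is package-private in java.util.HashMap):

public class TableSizeForDemo
{
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Copy of HashMap.tableSizeFor, for demonstration only.
    static int tableSizeFor(int cap)
    {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args)
    {
        System.out.println(tableSizeFor(6));  // 8
        System.out.println(tableSizeFor(8));  // 8 (already a power of two)
        System.out.println(tableSizeFor(12)); // 16
        System.out.println(tableSizeFor(17)); // 32
    }
}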
The hash computation method:
/*
 * XORs the high half of the hashCode into the low half to mix them further:
 * the low bits now carry information from the high bits, which both adds
 * entropy and preserves the high-bit information.
 */
static final int hash(Object key)
{
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
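Why the high-bit XOR matters: the bucket index is computed as (n - 1) & hash, which only looks at the low bits when the table is small, so keys differing only in their upper bits would always collide. A minimal sketch (class name and sample values are illustrative; the hash method is copied for demonstration):

public class HashSpreadDemo
{
    // Copy of HashMap.hash, for demonstration only.
    static int hash(Object key)
    {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args)
    {
        int n = 16; // table size: the index uses only the low 4 bits

        // Two hash codes that differ only in the high 16 bits:
        int h1 = 0x10005;
        int h2 = 0x20005;

        // Without spreading, both map to bucket 5: a guaranteed collision.
        System.out.println((n - 1) & h1); // 5
        System.out.println((n - 1) & h2); // 5

        // With the XOR spread, the high bits influence the index.
        System.out.println((n - 1) & (h1 ^ (h1 >>> 16))); // 4
        System.out.println((n - 1) & (h2 ^ (h2 >>> 16))); // 7
    }
}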
The resize (expansion) method:
/**
 * Initializes or doubles the table size. If the table is null, it is
 * allocated in accord with the initial capacity target held in the
 * threshold field. Otherwise, because we are using power-of-two expansion,
 * the elements from each bin must either stay at the same index, or move
 * with a power-of-two offset in the new table.
 */
final Node<K, V>[] resize()
{
Node<K, V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
/*
 * If oldCap > 0 the table is non-null: oldCap is the old table size and
 * oldThr the old threshold (= oldCap * loadFactor).
 */
if (oldCap > 0)
{
// If oldCap is already at the maximum, set threshold to Integer.MAX_VALUE so no further resize ever happens; otherwise double the threshold
if (oldCap >= MAXIMUM_CAPACITY)
{
threshold = Integer.MAX_VALUE;
return oldTab;
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}
/*
 * Reaching here means oldCap <= 0. If oldThr > 0, the HashMap is in its
 * initial state with a null table and a positive threshold (it was built
 * with HashMap(int initialCapacity, float loadFactor), HashMap(int
 * initialCapacity), or HashMap(Map<? extends K, ? extends V> m), which
 * leave table null, capacity 0, and threshold holding the user-specified
 * initial capacity). In that case the stored threshold becomes the new
 * capacity directly.
 */
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
/*
 * Reaching here means oldCap <= 0 and oldThr <= 0: the HashMap was
 * created with the no-arg HashMap() constructor, so capacity and
 * threshold are both 0.
 */
else
{ // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
// If the new threshold is still 0, compute it from the new capacity
if (newThr == 0)
{
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ? (int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings ({"rawtypes", "unchecked"})
Node<K, V>[] newTab = (Node<K, V>[])new Node[newCap];
table = newTab;
if (oldTab != null)
{
for (int j = 0; j < oldCap; ++j)
{
Node<K, V> e;
if ((e = oldTab[j]) != null)
{
// free the old slot
oldTab[j] = null;
// a single node is simply re-indexed into newTab
if (e.next == null)
    newTab[e.hash & (newCap - 1)] = e;
// a TreeNode bin is split and rehashed as a red-black tree
else if (e instanceof TreeNode)
    ((TreeNode<K, V>)e).split(this, newTab, j, oldCap);
// a linked-list bin is rehashed list-style
else
{ // preserve order
Node<K, V> loHead = null, loTail = null;
Node<K, V> hiHead = null, hiTail = null;
Node<K, V> next;
do
{
next = e.next;
// Split the nodes in this bucket by whether (e.hash & oldCap) is 0:
// that single bit decides whether a node's index changes after the rehash.
// Bit == 0: the node's index is unchanged (the "lo" list).
if ((e.hash & oldCap) == 0)
{
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
// Bit == 1: the node's index changes to j + oldCap (the "hi" list)
else
{
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null)
{
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null)
{
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
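A minimal sketch of the split rule used above (class name and sample hashes are illustrative), assuming a resize from oldCap = 16 to 32: testing the single bit (hash & oldCap) tells whether a node stays at index j or moves to j + oldCap.

public class ResizeSplitDemo
{
    public static void main(String[] args)
    {
        int oldCap = 16, newCap = 32;

        int hash1 = 0b00101; // bit 4 (the oldCap bit) is 0
        int hash2 = 0b10101; // bit 4 is 1

        // Old indices: both land in bucket 5 of the 16-slot table.
        System.out.println(hash1 & (oldCap - 1)); // 5
        System.out.println(hash2 & (oldCap - 1)); // 5

        // New indices after doubling:
        System.out.println(hash1 & (newCap - 1)); // 5            (lo list: stays at j)
        System.out.println(hash2 & (newCap - 1)); // 21 = 5 + 16  (hi list: moves to j + oldCap)

        // The bit test resize() actually uses:
        System.out.println((hash1 & oldCap) == 0); // true  -> lo
        System.out.println((hash2 & oldCap) == 0); // false -> hi
    }
}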
The part that follows is adapted from 小北觅's blog post《一文读懂HashMap》. The author explains it superbly, and the notes above were also written after reading that article. If anything here is unclear, go read the original; it is genuinely excellent.
Note: the situations in which a hash collision can occur (see the sketch after this list):
1. Two nodes have the same key (and therefore necessarily the same hash value), causing a collision;
2. Two nodes have different keys, but the limitations of the hash function give them the same hash value, causing a collision;
3. Two nodes have different keys and different hash values, but the hash values map to the same index once reduced modulo the array length, causing a collision.
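A minimal sketch of cases 2 and 3 (class name is hypothetical; raw hashCode is used for simplicity, whereas HashMap additionally applies its hash() spread first):

public class CollisionCasesDemo
{
    public static void main(String[] args)
    {
        int n = 16; // table size

        // Case 2: different keys, same hash value
        // ("Aa" and "BB" both hash to 2112).
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true

        // Case 3: different keys, different hash values, but the same
        // index once masked with (n - 1): 1 and 17 both land in bucket 1.
        Integer k1 = 1, k2 = 17;
        System.out.println(k1.hashCode() & (n - 1)); // 1
        System.out.println(k2.hashCode() & (n - 1)); // 1
    }
}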
3. Differences between HashMap in Java 1.7 and 1.8
(1) JDK 1.7 uses head insertion, while JDK 1.8 and later use tail insertion. Why the change? JDK 1.7 grows each bucket's singly linked list from the head, which makes insertion slightly cheaper, but during a concurrent resize it can reverse the list and create a circular linked list, leading to an infinite loop. JDK 1.8 (which also introduced red-black trees) switched to tail insertion, which avoids the reversal and the circular-list infinite loop.
(2) The way a node's post-resize position is computed also differs:
In JDK 1.7, the index is recomputed by ANDing the hash with the new length minus one: hash & (length - 1). This is exactly why the capacity must always be a power of two: only then is length - 1 a run of binary ones, so every low bit of the hash participates in the index, which minimizes hash collisions.
JDK 1.8 exploits the regularity in that computation: after doubling, a node either stays at its original index or moves to original index + oldCap. So instead of recomputing the full index for every node, it only has to test the one newly significant hash bit (hash & oldCap): 0 means the index is unchanged, 1 means the node moves by oldCap. (The resize() walkthrough above and the sketch after it show this rule in action.)
(3) JDK 1.7 uses an array + singly linked list. JDK 1.8 and later use an array + linked list + red-black tree: when a bucket's list reaches the default threshold of 8, it is converted into a red-black tree (provided the table is large enough; otherwise the table is resized instead), improving the worst-case lookup in that bucket from O(N) to O(log N).
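A hedged sketch of how a bad hashCode degrades a bucket (BadKey is a hypothetical key class; in JDK 1.8, once enough keys pile into one bucket it is treeified, so lookups cost O(log N) instead of O(N)):

import java.util.HashMap;
import java.util.Map;

public class BadHashDemo
{
    // Hypothetical key whose hashCode forces every instance into one bucket.
    // It implements Comparable so the tree bin can order keys by compareTo.
    static final class BadKey implements Comparable<BadKey>
    {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // constant: all keys collide
        @Override public boolean equals(Object o)
        {
            return o instanceof BadKey && ((BadKey)o).id == id;
        }
        @Override public int compareTo(BadKey o) { return Integer.compare(id, o.id); }
    }

    public static void main(String[] args)
    {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++)
            map.put(new BadKey(i), i);
        // In JDK 1.7 this bucket is a 10_000-node list (O(N) lookups).
        // In JDK 1.8 it is treeified once it passes TREEIFY_THRESHOLD (8),
        // so lookups cost O(log N) even though every key collides.
        System.out.println(map.get(new BadKey(9_999))); // 9999
    }
}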
4. Why is HashMap not thread-safe?
HashMap's problems under concurrency fall into two main categories:
Data inconsistency caused by concurrent put
Suppose threads A and B each want to insert a key-value pair, and the two keys hash to the same bucket. Thread A computes the bucket index and reads the current head of that bucket's list; then its time slice runs out. Thread B is scheduled, goes through the same steps, and successfully links its record into the bucket. When thread A is scheduled again, it still holds the now-stale head reference, and, knowing nothing of B's insertion, links its own record based on that stale head. This overwrites the record thread B inserted, which silently vanishes: the map is left inconsistent. A sketch of this race follows.
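A hedged sketch that makes the lost-update race observable (class name is hypothetical; results vary from run to run, but the final size is typically less than the number of puts, and the map may even be corrupted):

import java.util.HashMap;
import java.util.Map;

public class LostUpdateDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        Map<Integer, Integer> map = new HashMap<>(); // not thread-safe

        Runnable writer1 = () -> { for (int i = 0; i < 50_000; i++) map.put(i, i); };
        Runnable writer2 = () -> { for (int i = 50_000; i < 100_000; i++) map.put(i, i); };

        Thread t1 = new Thread(writer1), t2 = new Thread(writer2);
        t1.start(); t2.start();
        t1.join();  t2.join();

        // Expected 100000; typically prints less, because concurrent puts
        // into the same bucket (or during a resize) silently overwrite
        // each other's links.
        System.out.println(map.size());
    }
}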
An infinite loop triggered by resize
This happens during automatic expansion: two threads simultaneously detect that the element count exceeds array size × load factor, and both call resize() from within put(). Two threads rewiring the same linked list at once can produce a circular list (in JDK 1.7, resize reverses the order of the elements, which makes this possible). Any later attempt to get() an element from that bucket then loops forever.
5. Differences between HashMap and Hashtable
HashMap and Hashtable both implement the Map interface, but you should understand how they differ before deciding which to use. The main differences are thread safety, synchronization, and speed.
HashMap is nearly equivalent to Hashtable, except that HashMap is unsynchronized and accepts nulls (HashMap permits a null key and null values; Hashtable permits neither).
HashMap is unsynchronized while Hashtable is synchronized, which means Hashtable is thread-safe and can be shared among multiple threads, whereas a HashMap cannot be shared safely without proper external synchronization. Java 5 introduced ConcurrentHashMap, a replacement for Hashtable that scales far better.
Another difference is that HashMap's Iterator is a fail-fast iterator, while Hashtable's Enumeration is not fail-fast. So if another thread structurally changes the HashMap (adds or removes an element) during iteration, a ConcurrentModificationException will be thrown; removing an element through the iterator's own remove() method will not throw it. This is not guaranteed behavior, however; it is best-effort and depends on the JVM. The same point is also a difference between Enumeration and Iterator in general.
Because Hashtable is thread-safe and synchronized, it is slower than HashMap in a single-threaded environment. If you do not need synchronization and only a single thread is involved, HashMap performs better than Hashtable (see the sketch at the end of this section).
HashMap does not guarantee that the order of its elements will remain constant over time.
Important terms to keep in mind:
Synchronized means that only one thread at a time can modify the Hashtable: any thread that wants to update it must first acquire the synchronization lock, and other threads must wait until the lock is released before they can acquire it and update the Hashtable in turn.
Fail-fast relates to iterators. If a collection has handed out an Iterator or ListIterator and another thread then modifies the collection "structurally", a ConcurrentModificationException will be thrown. Changing an existing element through the iterator's set() method is allowed, because it does not structurally modify the collection; but once a structural modification has happened, calling set() will also fail with a ConcurrentModificationException.
A structural modification means deleting or inserting an element in a way that changes the structure of the map.
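To make the choice concrete, here is a small hedged sketch (not from the original article; the class name is hypothetical): plain HashMap for single-threaded code, ConcurrentHashMap (or a synchronized wrapper) for shared maps, with the legacy Hashtable best avoided.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapChoiceDemo
{
    public static void main(String[] args)
    {
        // Single-threaded: a plain HashMap is the fastest option.
        Map<String, Integer> local = new HashMap<>();

        // Shared across threads: ConcurrentHashMap scales far better than
        // Hashtable because it does not lock the whole table on every access.
        Map<String, Integer> shared = new ConcurrentHashMap<>();

        // Legacy alternative: a synchronized wrapper (one lock, like Hashtable,
        // but unlike Hashtable it still permits a null key and null values).
        Map<String, Integer> wrapped = Collections.synchronizedMap(new HashMap<>());

        local.put("a", 1);
        shared.put("b", 2); // note: ConcurrentHashMap rejects null keys and values
        wrapped.put("c", 3);
    }
}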
IV. Full Source Walkthrough
package java.util;
import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import sun.misc.SharedSecrets;
public class HashMap<K, V> extends AbstractMap<K, V> implements Map<K, V>, Cloneable, Serializable
{
private static final long serialVersionUID = 362498820763181265L;
/**
 * The default initial capacity. Must be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
 * The maximum capacity, used if a higher value is implicitly specified by
 * either of the constructors with arguments. Must be a power of two <= 1<<30.
 */
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
 * The load factor used when none is specified in the constructor.
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
 * The bin count threshold for using a tree rather than a list for a bin.
 * Bins are converted to trees when adding an element to a bin with at
 * least this many nodes. The value must be greater than 2 and should be
 * at least 8 to mesh with the assumptions in tree removal about
 * conversion back to plain bins upon shrinkage.
 */
static final int TREEIFY_THRESHOLD = 8;
/**
* The bin count threshold for untreeifying a (split) bin during a resize
* operation. Should be less than TREEIFY_THRESHOLD, and at most 6 to mesh
* with shrinkage detection under removal.
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
 * The smallest table capacity for which bins may be treeified. (Otherwise
 * the table is resized if too many nodes accumulate in a bin.) Should be
 * at least 4 * TREEIFY_THRESHOLD to avoid conflicts between resizing and
 * treeification thresholds.
 */
static final int MIN_TREEIFY_CAPACITY = 64;
/**
 * Basic hash bin node, used for most entries. (See below for the TreeNode
 * subclass, and in LinkedHashMap for its Entry subclass.)
 */
static class Node<K, V> implements Map.Entry<K, V>
{
final int hash;
final K key;
V value;
Node<K, V> next;
Node(int hash, K key, V value, Node<K, V> next)
{
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
public final K getKey()
{
return key;
}
public final V getValue()
{
return value;
}
public final String toString()
{
return key + "=" + value;
}
public final int hashCode()
{
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
public final V setValue(V newValue)
{
V oldValue = value;
value = newValue;
return oldValue;
}
public final boolean equals(Object o)
{
if (o == this)
return true;
if (o instanceof Map.Entry)
{
Map.Entry<?, ?> e = (Map.Entry<?, ?>)o;
if (Objects.equals(key, e.getKey()) && Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
/* ---------------- Static utilities -------------- */
/*
 * XORs the high half of the hashCode into the low half to mix them further:
 * the low bits now carry information from the high bits, which both adds
 * entropy and preserves the high-bit information.
 */
static final int hash(Object key)
{
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
/**
* Returns x's Class if it is of the form "class C implements
* Comparable<C>", else null.
*/
static Class<?> comparableClassFor(Object x)
{
if (x instanceof Comparable)
{
Class<?> c;
Type[] ts, as;
Type t;
ParameterizedType p;
if ((c = x.getClass()) == String.class) // bypass checks
return c;
if ((ts = c.getGenericInterfaces()) != null)
{
for (int i = 0; i < ts.length; ++i)
{
if (((t = ts[i]) instanceof ParameterizedType)
&& ((p = (ParameterizedType)t).getRawType() == Comparable.class)
&& (as = p.getActualTypeArguments()) != null && as.length == 1 && as[0] == c) // type arg is c
    return c;
}
}
}
return null;
}
/**
* Returns k.compareTo(x) if x matches kc (k's screened comparable class),
* else 0.
*/
@SuppressWarnings ({"rawtypes", "unchecked"}) // for cast to Comparable
static int compareComparables(Class<?> kc, Object k, Object x)
{
return (x == null || x.getClass() != kc ? 0 : ((Comparable)k).compareTo(x));
}
/**
 * Guarantees that the returned value is a power of two: for any cap with
 * 2^N < cap <= 2^(N+1), the result is 2^(N+1), the smallest power of two
 * greater than or equal to cap. int n = cap - 1; ensures that when cap is
 * already a power of two, the original value is returned rather than
 * double it: 8 returns 8, not 16.
 */
static final int tableSizeFor(int cap)
{
int n = cap - 1; // ensures that when cap is already a power of two, the original value is returned: 8 returns 8, not 16
n |= n >>> 1;
n |= n >>> 2;
n |= n >>> 4;
n |= n >>> 8;
n |= n >>> 16;
return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
/* ---------------- Fields -------------- */
/**
 * The table, initialized on first use, and resized as necessary. When
 * allocated, its length is always a power of two. (We also tolerate
 * length zero in some operations, to allow bootstrapping mechanics that
 * are currently not needed.) This array is what actually stores the keys
 * and values, wrapped in Node objects.
 */
transient Node<K, V>[] table;
/**
 * Holds the cached entrySet(). Note that the AbstractMap fields are used
 * for keySet() and values().
 */
transient Set<Map.Entry<K, V>> entrySet;
/**
 * The number of key-value mappings contained in this map.
 */
transient int size;
/**
 * The number of structural modifications.
 */
transient int modCount;
/**
 * The next size value at which to resize (capacity * load factor). (The
 * javadoc description is true upon serialization. Additionally, if the
 * table array has not been allocated, this field holds the initial array
 * capacity, or zero signifying DEFAULT_INITIAL_CAPACITY.)
 */
int threshold;
final float loadFactor;
/* ---------------- Public operations -------------- */
public HashMap(int initialCapacity, float loadFactor)
{
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
this.loadFactor = loadFactor;
this.threshold = tableSizeFor(initialCapacity);
}
public HashMap(int initialCapacity)
{
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
/**
 * Constructs an empty HashMap with the default initial capacity (16) and
 * the default load factor (0.75).
 */
public HashMap()
{
this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
public HashMap(Map<? extends K, ? extends V> m)
{
this.loadFactor = DEFAULT_LOAD_FACTOR;
putMapEntries(m, false);
}
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict)
{
int s = m.size();
if (s > 0)
{
if (table == null)
{ // compute the capacity
    float ft = ((float)s / loadFactor) + 1.0F;
    int t = ((ft < (float)MAXIMUM_CAPACITY) ? (int)ft : MAXIMUM_CAPACITY);
    if (t > threshold)
        threshold = tableSizeFor(t); // set the next resize threshold
}
else if (s > threshold)
    resize(); // if the incoming size exceeds the threshold, resize first
for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
{
K key = e.getKey();
V value = e.getValue();
putVal(hash(key), key, value, false, evict);
}
}
}
public int size()
{
return size;
}
public boolean isEmpty()
{
return size == 0;
}
public V get(Object key)
{
Node<K, V> e;
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
final Node<K, V> getNode(int hash, Object key)
{
Node<K, V>[] tab;
Node<K, V> first, e;
int n;
K k;
if ((tab = table) != null && (n = tab.length) > 0 && (first = tab[(n - 1) & hash]) != null)
{
// check the first node before walking the chain
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
if ((e = first.next) != null)
{
// tree bin
if (first instanceof TreeNode)
return ((TreeNode<K, V>)first).getTreeNode(hash, key);
// linked list
do
{
if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
public boolean containsKey(Object key)
{
return getNode(hash(key), key) != null;
}
/**
 * Associates the specified value with the specified key in this map. If
 * the map previously contained a mapping for the key, the old value is
 * replaced.
 */
public V put(K key, V value)
{
return putVal(hash(key), key, value, false, true);
}
/**
* Implements Map.put and related methods.
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict)
{
Node<K, V>[] tab;
Node<K, V> p;
int n, i;
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
// empty bucket: place the new node directly
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
else
{
Node<K, V> e;
K k;
// the head node already has this key
if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
e = p;
// tree bin: take the TreeNode path
else if (p instanceof TreeNode)
e = ((TreeNode<K, V>)p).putTreeVal(this, tab, hash, key, value);
// linked list: walk the chain
else
{
for (int binCount = 0;; ++binCount)
{
if ((e = p.next) == null)
{
p.next = newNode(hash, key, value, null);
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
// list too long: convert it to a red-black tree
treeifyBin(tab, hash);
break;
}
if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
// the key already exists: replace the old value
if (e != null)
{ // existing mapping for key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
if (++size > threshold)
resize();
afterNodeInsertion(evict);
return null;
}
/**
 * Initializes or doubles the table size. If the table is null, it is
 * allocated in accord with the initial capacity target held in the
 * threshold field. Otherwise, because we are using power-of-two
 * expansion, the elements from each bin must either stay at the same
 * index, or move with a power-of-two offset in the new table.
 *
 * @return the table
 */
* @return the table
*/
final Node<K, V>[] resize()
{
Node<K, V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
/*
 * If oldCap > 0 the table is non-null: oldCap is the old table size and
 * oldThr the old threshold (= oldCap * loadFactor).
 */
if (oldCap > 0)
{
// If oldCap is already at the maximum, set threshold to Integer.MAX_VALUE so no further resize ever happens; otherwise double the threshold
if (oldCap >= MAXIMUM_CAPACITY)
{
threshold = Integer.MAX_VALUE;
return oldTab;
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}
/*
 * Reaching here means oldCap <= 0. If oldThr > 0, the HashMap is in its
 * initial state with a null table and a positive threshold (it was built
 * with HashMap(int initialCapacity, float loadFactor), HashMap(int
 * initialCapacity), or HashMap(Map<? extends K, ? extends V> m), which
 * leave table null, capacity 0, and threshold holding the user-specified
 * initial capacity). In that case the stored threshold becomes the new
 * capacity directly.
 */
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
/*
 * Reaching here means oldCap <= 0 and oldThr <= 0: the HashMap was
 * created with the no-arg HashMap() constructor, so capacity and
 * threshold are both 0.
 */
else
{ // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
// If the new threshold is still 0, compute it from the new capacity
if (newThr == 0)
{
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ? (int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings ({"rawtypes", "unchecked"})
Node<K, V>[] newTab = (Node<K, V>[])new Node[newCap];
table = newTab;
if (oldTab != null)
{
for (int j = 0; j < oldCap; ++j)
{
Node<K, V> e;
if ((e = oldTab[j]) != null)
{
// free the old slot
oldTab[j] = null;
// a single node is simply re-indexed into newTab
if (e.next == null)
    newTab[e.hash & (newCap - 1)] = e;
// a TreeNode bin is split and rehashed as a red-black tree
else if (e instanceof TreeNode)
    ((TreeNode<K, V>)e).split(this, newTab, j, oldCap);
// a linked-list bin is rehashed list-style
else
{ // preserve order
Node<K, V> loHead = null, loTail = null;
Node<K, V> hiHead = null, hiTail = null;
Node<K, V> next;
do
{
next = e.next;
// Split the nodes in this bucket by whether (e.hash & oldCap) is 0:
// that single bit decides whether a node's index changes after the rehash.
// Bit == 0: the node's index is unchanged (the "lo" list).
if ((e.hash & oldCap) == 0)
{
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
// Bit == 1: the node's index changes to j + oldCap (the "hi" list)
else
{
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null)
{
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null)
{
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
/**
* Replaces all linked nodes in bin at index for given hash unless table is
* too small, in which case resizes instead.
*/
final void treeifyBin(Node<K, V>[] tab, int hash)
{
int n, index;
Node<K, V> e;
if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
resize();
else if ((e = tab[index = (n - 1) & hash]) != null)
{
TreeNode<K, V> hd = null, tl = null;
do
{
TreeNode<K, V> p = replacementTreeNode(e, null);
if (tl == null)
hd = p;
else
{
p.prev = tl;
tl.next = p;
}
tl = p;
} while ((e = e.next) != null);
if ((tab[index] = hd) != null)
hd.treeify(tab);
}
}
public void putAll(Map<? extends K, ? extends V> m)
{
putMapEntries(m, true);
}
public V remove(Object key)
{
Node<K, V> e;
return (e = removeNode(hash(key), key, null, false, true)) == null ? null : e.value;
}
/*
 * Removal is analogous to insertion.
 */
final Node<K, V> removeNode(int hash, Object key, Object value, boolean matchValue, boolean movable)
{
Node<K, V>[] tab;
Node<K, V> p;
int n, index;
if ((tab = table) != null && (n = tab.length) > 0 && (p = tab[index = (n - 1) & hash]) != null)
{
Node<K, V> node = null, e;
K k;
V v;
if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
node = p;
else if ((e = p.next) != null)
{
if (p instanceof TreeNode)
node = ((TreeNode<K, V>)p).getTreeNode(hash, key);
else
{
do
{
if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
{
node = e;
break;
}
p = e;
} while ((e = e.next) != null);
}
}
if (node != null && (!matchValue || (v = node.value) == value || (value != null && value.equals(v))))
{
if (node instanceof TreeNode)
((TreeNode<K, V>)node).removeTreeNode(this, tab, movable);
else if (node == p)
tab[index] = node.next;
else
p.next = node.next;
++modCount;
--size;
afterNodeRemoval(node);
return node;
}
}
return null;
}
public void clear()
{
Node<K, V>[] tab;
modCount++;
if ((tab = table) != null && size > 0)
{
size = 0;
for (int i = 0; i < tab.length; ++i)
tab[i] = null;
}
}
public boolean containsValue(Object value)
{
Node<K, V>[] tab;
V v;
if ((tab = table) != null && size > 0)
{
// iterate over the table
for (int i = 0; i < tab.length; ++i)
{
// iterate over the nodes in each bucket
for (Node<K, V> e = tab[i]; e != null; e = e.next)
{
if ((v = e.value) == value || (value != null && value.equals(v)))
return true;
}
}
}
return false;
}
public Set<K> keySet()
{
Set<K> ks = keySet;
if (ks == null)
{
ks = new KeySet();
keySet = ks;
}
return ks;
}
final class KeySet extends AbstractSet<K>
{
public final int size()
{
return size;
}
public final void clear()
{
HashMap.this.clear();
}
public final Iterator<K> iterator()
{
return new KeyIterator();
}
public final boolean contains(Object o)
{
return containsKey(o);
}
public final boolean remove(Object key)
{
return removeNode(hash(key), key, null, false, true) != null;
}
public final Spliterator<K> spliterator()
{
return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super K> action)
{
Node<K, V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null)
{
int mc = modCount;
for (int i = 0; i < tab.length; ++i)
{
for (Node<K, V> e = tab[i]; e != null; e = e.next)
action.accept(e.key);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
public Collection<V> values()
{
Collection<V> vs = values;
if (vs == null)
{
vs = new Values();
values = vs;
}
return vs;
}
final class Values extends AbstractCollection<V>
{
public final int size()
{
return size;
}
public final void clear()
{
HashMap.this.clear();
}
public final Iterator<V> iterator()
{
return new ValueIterator();
}
public final boolean contains(Object o)
{
return containsValue(o);
}
public final Spliterator<V> spliterator()
{
return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super V> action)
{
Node<K, V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null)
{
int mc = modCount;
for (int i = 0; i < tab.length; ++i)
{
for (Node<K, V> e = tab[i]; e != null; e = e.next)
action.accept(e.value);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
public Set<Map.Entry<K, V>> entrySet()
{
Set<Map.Entry<K, V>> es;
return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
}
final class EntrySet extends AbstractSet<Map.Entry<K, V>>
{
public final int size()
{
return size;
}
public final void clear()
{
HashMap.this.clear();
}
public final Iterator<Map.Entry<K, V>> iterator()
{
return new EntryIterator();
}
public final boolean contains(Object o)
{
if (!(o instanceof Map.Entry))
return false;
Map.Entry<?, ?> e = (Map.Entry<?, ?>)o;
Object key = e.getKey();
Node<K, V> candidate = getNode(hash(key), key);
return candidate != null && candidate.equals(e);
}
public final boolean remove(Object o)
{
if (o instanceof Map.Entry)
{
Map.Entry<?, ?> e = (Map.Entry<?, ?>)o;
Object key = e.getKey();
Object value = e.getValue();
return removeNode(hash(key), key, value, true, true) != null;
}
return false;
}
public final Spliterator<Map.Entry<K, V>> spliterator()
{
return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
}
public final void forEach(Consumer<? super Map.Entry<K, V>> action)
{
Node<K, V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null)
{
int mc = modCount;
for (int i = 0; i < tab.length; ++i)
{
for (Node<K, V> e = tab[i]; e != null; e = e.next)
action.accept(e);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
}
// Overrides of JDK8 Map extension methods
@Override
public V getOrDefault(Object key, V defaultValue)
{
Node<K, V> e;
return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
}
@Override
public V putIfAbsent(K key, V value)
{
return putVal(hash(key), key, value, true, true);
}
@Override
public boolean remove(Object key, Object value)
{
return removeNode(hash(key), key, value, true, true) != null;
}
@Override
public boolean replace(K key, V oldValue, V newValue)
{
Node<K, V> e;
V v;
if ((e = getNode(hash(key), key)) != null && ((v = e.value) == oldValue || (v != null && v.equals(oldValue))))
{
e.value = newValue;
afterNodeAccess(e);
return true;
}
return false;
}
@Override
public V replace(K key, V value)
{
Node<K, V> e;
if ((e = getNode(hash(key), key)) != null)
{
V oldValue = e.value;
e.value = value;
afterNodeAccess(e);
return oldValue;
}
return null;
}
@Override
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction)
{
if (mappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K, V>[] tab;
Node<K, V> first;
int n, i;
int binCount = 0;
TreeNode<K, V> t = null;
Node<K, V> old = null;
if (size > threshold || (tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null)
{
// red-black tree
if (first instanceof TreeNode)
old = (t = (TreeNode<K, V>)first).getTreeNode(hash, key);
else
{
// linked list
Node<K, V> e = first;
K k;
do
{
if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
{
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
V oldValue;
if (old != null && (oldValue = old.value) != null)
{
afterNodeAccess(old);
return oldValue;
}
}
// compute the value
V v = mappingFunction.apply(key);
if (v == null)
{
return null;
}
// an existing node was found: update its value
else if (old != null)
{
old.value = v;
afterNodeAccess(old);
return v;
}
// tree bin
else if (t != null)
t.putTreeVal(this, tab, hash, key, v);
else
{
tab[i] = newNode(hash, key, v, first);
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
return v;
}
public V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction)
{
if (remappingFunction == null)
throw new NullPointerException();
Node<K, V> e;
V oldValue;
int hash = hash(key);
if ((e = getNode(hash, key)) != null && (oldValue = e.value) != null)
{
V v = remappingFunction.apply(key, oldValue);
if (v != null)
{
e.value = v;
afterNodeAccess(e);
return v;
}
// remapping to null removes the entry
else
removeNode(hash, key, null, false, true);
}
return null;
}
@Override
public V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction)
{
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K, V>[] tab;
Node<K, V> first;
int n, i;
int binCount = 0;
TreeNode<K, V> t = null;
Node<K, V> old = null;
if (size > threshold || (tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null)
{
if (first instanceof TreeNode)
old = (t = (TreeNode<K, V>)first).getTreeNode(hash, key);
else
{
Node<K, V> e = first;
K k;
do
{
if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
{
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
}
V oldValue = (old == null) ? null : old.value;
V v = remappingFunction.apply(key, oldValue);
if (old != null)
{
if (v != null)
{
old.value = v;
afterNodeAccess(old);
}
else
removeNode(hash, key, null, false, true);
}
else if (v != null)
{
if (t != null)
t.putTreeVal(this, tab, hash, key, v);
else
{
tab[i] = newNode(hash, key, v, first);
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
}
return v;
}
@Override
public V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction)
{
if (value == null)
throw new NullPointerException();
if (remappingFunction == null)
throw new NullPointerException();
int hash = hash(key);
Node<K, V>[] tab;
Node<K, V> first;
int n, i;
int binCount = 0;
TreeNode<K, V> t = null;
Node<K, V> old = null;
if (size > threshold || (tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
if ((first = tab[i = (n - 1) & hash]) != null)
{
if (first instanceof TreeNode)
old = (t = (TreeNode<K, V>)first).getTreeNode(hash, key);
else
{
Node<K, V> e = first;
K k;
do
{
if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
{
old = e;
break;
}
++binCount;
} while ((e = e.next) != null);
}
}
if (old != null)
{
V v;
if (old.value != null)
v = remappingFunction.apply(old.value, value);
else
v = value;
if (v != null)
{
old.value = v;
afterNodeAccess(old);
}
else
removeNode(hash, key, null, false, true);
return v;
}
if (value != null)
{
if (t != null)
t.putTreeVal(this, tab, hash, key, value);
else
{
tab[i] = newNode(hash, key, value, first);
if (binCount >= TREEIFY_THRESHOLD - 1)
treeifyBin(tab, hash);
}
++modCount;
++size;
afterNodeInsertion(true);
}
return value;
}
@Override
public void forEach(BiConsumer<? super K, ? super V> action)
{
Node<K, V>[] tab;
if (action == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null)
{
int mc = modCount;
for (int i = 0; i < tab.length; ++i)
{
for (Node<K, V> e = tab[i]; e != null; e = e.next)
action.accept(e.key, e.value);
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
@Override
public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function)
{
Node<K, V>[] tab;
if (function == null)
throw new NullPointerException();
if (size > 0 && (tab = table) != null)
{
int mc = modCount;
for (int i = 0; i < tab.length; ++i)
{
for (Node<K, V> e = tab[i]; e != null; e = e.next)
{
e.value = function.apply(e.key, e.value);
}
}
if (modCount != mc)
throw new ConcurrentModificationException();
}
}
/* ------------------------------------------------------------ */
// Cloning and serialization
/**
* Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
* values themselves are not cloned.
*
* @return a shallow copy of this map
*/
@SuppressWarnings ("unchecked")
@Override
public Object clone()
{
HashMap<K, V> result;
try
{
result = (HashMap<K, V>)super.clone();
}
catch (CloneNotSupportedException e)
{
// this shouldn't happen, since we are Cloneable
throw new InternalError(e);
}
result.reinitialize();
result.putMapEntries(this, false);
return result;
}
// These methods are also used when serializing HashSets
final float loadFactor()
{
return loadFactor;
}
final int capacity()
{
return (table != null) ? table.length : (threshold > 0) ? threshold : DEFAULT_INITIAL_CAPACITY;
}
private void writeObject(java.io.ObjectOutputStream s) throws IOException
{
int buckets = capacity();
// Write out the threshold, loadfactor, and any hidden stuff
s.defaultWriteObject();
s.writeInt(buckets);
s.writeInt(size);
internalWriteEntries(s);
}
private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException
{
// Read in the threshold (ignored), loadfactor, and any hidden stuff
s.defaultReadObject();
reinitialize();
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new InvalidObjectException("Illegal load factor: " + loadFactor);
s.readInt(); // Read and ignore number of buckets
int mappings = s.readInt(); // Read number of mappings (size)
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " + mappings);
else if (mappings > 0)
{ // (if zero, use defaults)
// Size the table using given load factor only if within
// range of 0.25...4.0
float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
float fc = (float)mappings / lf + 1.0f;
int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ? DEFAULT_INITIAL_CAPACITY
: (fc >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : tableSizeFor((int)fc));
float ft = (float)cap * lf;
threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ? (int)ft : Integer.MAX_VALUE);
// Check Map.Entry[].class since it's the nearest public type to
// what we're actually creating.
SharedSecrets.getJavaOISAccess().checkArray(s, Map.Entry[].class, cap);
@SuppressWarnings ({"rawtypes", "unchecked"})
Node<K, V>[] tab = (Node<K, V>[])new Node[cap];
table = tab;
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++)
{
@SuppressWarnings ("unchecked")
K key = (K)s.readObject();
@SuppressWarnings ("unchecked")
V value = (V)s.readObject();
putVal(hash(key), key, value, false, false);
}
}
}
/* ------------------------------------------------------------ */
// iterators
abstract class HashIterator
{
Node<K, V> next; // next entry to return
Node<K, V> current; // current entry
int expectedModCount; // for fast-fail
int index; // current slot
HashIterator()
{
expectedModCount = modCount;
Node<K, V>[] t = table;
current = next = null;
index = 0;
if (t != null && size > 0)
{ // advance to first entry
do
{} while (index < t.length && (next = t[index++]) == null);
}
}
public final boolean hasNext()
{
return next != null;
}
final Node<K, V> nextNode()
{
Node<K, V>[] t;
Node<K, V> e = next;
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
if (e == null)
throw new NoSuchElementException();
if ((next = (current = e).next) == null && (t = table) != null)
{
do
{} while (index < t.length && (next = t[index++]) == null);
}
return e;
}
public final void remove()
{
Node<K, V> p = current;
if (p == null)
throw new IllegalStateException();
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
current = null;
K key = p.key;
removeNode(hash(key), key, null, false, false);
expectedModCount = modCount;
}
}
final class KeyIterator extends HashIterator implements Iterator<K>
{
public final K next()
{
return nextNode().key;
}
}
final class ValueIterator extends HashIterator implements Iterator<V>
{
public final V next()
{
return nextNode().value;
}
}
final class EntryIterator extends HashIterator implements Iterator<Map.Entry<K, V>>
{
public final Map.Entry<K, V> next()
{
return nextNode();
}
}
/* ------------------------------------------------------------ */
// spliterators
static class HashMapSpliterator<K, V>
{
final HashMap<K, V> map;
Node<K, V> current; // current node
int index; // current index, modified on advance/split
int fence; // one past last index
int est; // size estimate
int expectedModCount; // for comodification checks
HashMapSpliterator(HashMap<K, V> m, int origin, int fence, int est, int expectedModCount)
{
this.map = m;
this.index = origin;
this.fence = fence;
this.est = est;
this.expectedModCount = expectedModCount;
}
final int getFence()
{ // initialize fence and size on first use
int hi;
if ((hi = fence) < 0)
{
HashMap<K, V> m = map;
est = m.size;
expectedModCount = m.modCount;
Node<K, V>[] tab = m.table;
hi = fence = (tab == null) ? 0 : tab.length;
}
return hi;
}
public final long estimateSize()
{
getFence(); // force init
return (long)est;
}
}
static final class KeySpliterator<K, V> extends HashMapSpliterator<K, V> implements Spliterator<K>
{
KeySpliterator(HashMap<K, V> m, int origin, int fence, int est, int expectedModCount)
{
super(m, origin, fence, est, expectedModCount);
}
public KeySpliterator<K, V> trySplit()
{
int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
return (lo >= mid || current != null) ? null
: new KeySpliterator<>(map, lo, index = mid, est >>>= 1, expectedModCount);
}
public void forEachRemaining(Consumer<? super K> action)
{
int i, hi, mc;
if (action == null)
throw new NullPointerException();
HashMap<K, V> m = map;
Node<K, V>[] tab = m.table;
if ((hi = fence) < 0)
{
mc = expectedModCount = m.modCount;
hi = fence = (tab == null) ? 0 : tab.length;
}
else
mc = expectedModCount;
if (tab != null && tab.length >= hi && (i = index) >= 0 && (i < (index = hi) || current != null))
{
Node<K, V> p = current;
current = null;
do
{
if (p == null)
p = tab[i++];
else
{
action.accept(p.key);
p = p.next;
}
} while (p != null || i < hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer<? super K> action)
{
int hi;
if (action == null)
throw new NullPointerException();
Node<K, V>[] tab = map.table;
if (tab != null && tab.length >= (hi = getFence()) && index >= 0)
{
while (current != null || index < hi)
{
if (current == null)
current = tab[index++];
else
{
K k = current.key;
current = current.next;
action.accept(k);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics()
{
return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) | Spliterator.DISTINCT;
}
}
static final class ValueSpliterator<K, V> extends HashMapSpliterator<K, V> implements Spliterator<V>
{
ValueSpliterator(HashMap<K, V> m, int origin, int fence, int est, int expectedModCount)
{
super(m, origin, fence, est, expectedModCount);
}
public ValueSpliterator<K, V> trySplit()
{
int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
return (lo >= mid || current != null) ? null
: new ValueSpliterator<>(map, lo, index = mid, est >>>= 1, expectedModCount);
}
public void forEachRemaining(Consumer<? super V> action)
{
int i, hi, mc;
if (action == null)
throw new NullPointerException();
HashMap<K, V> m = map;
Node<K, V>[] tab = m.table;
if ((hi = fence) < 0)
{
mc = expectedModCount = m.modCount;
hi = fence = (tab == null) ? 0 : tab.length;
}
else
mc = expectedModCount;
if (tab != null && tab.length >= hi && (i = index) >= 0 && (i < (index = hi) || current != null))
{
Node<K, V> p = current;
current = null;
do
{
if (p == null)
p = tab[i++];
else
{
action.accept(p.value);
p = p.next;
}
} while (p != null || i < hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer<? super V> action)
{
int hi;
if (action == null)
throw new NullPointerException();
Node<K, V>[] tab = map.table;
if (tab != null && tab.length >= (hi = getFence()) && index >= 0)
{
while (current != null || index < hi)
{
if (current == null)
current = tab[index++];
else
{
V v = current.value;
current = current.next;
action.accept(v);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics()
{
return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
}
}
static final class EntrySpliterator<K, V> extends HashMapSpliterator<K, V> implements Spliterator<Map.Entry<K, V>>
{
EntrySpliterator(HashMap<K, V> m, int origin, int fence, int est, int expectedModCount)
{
super(m, origin, fence, est, expectedModCount);
}
public EntrySpliterator<K, V> trySplit()
{
int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
return (lo >= mid || current != null) ? null
: new EntrySpliterator<>(map, lo, index = mid, est >>>= 1, expectedModCount);
}
public void forEachRemaining(Consumer<? super Map.Entry<K, V>> action)
{
int i, hi, mc;
if (action == null)
throw new NullPointerException();
HashMap<K, V> m = map;
Node<K, V>[] tab = m.table;
if ((hi = fence) < 0)
{
mc = expectedModCount = m.modCount;
hi = fence = (tab == null) ? 0 : tab.length;
}
else
mc = expectedModCount;
if (tab != null && tab.length >= hi && (i = index) >= 0 && (i < (index = hi) || current != null))
{
Node<K, V> p = current;
current = null;
do
{
if (p == null)
p = tab[i++];
else
{
action.accept(p);
p = p.next;
}
} while (p != null || i < hi);
if (m.modCount != mc)
throw new ConcurrentModificationException();
}
}
public boolean tryAdvance(Consumer<? super Map.Entry<K, V>> action)
{
int hi;
if (action == null)
throw new NullPointerException();
Node<K, V>[] tab = map.table;
if (tab != null && tab.length >= (hi = getFence()) && index >= 0)
{
while (current != null || index < hi)
{
if (current == null)
current = tab[index++];
else
{
Node<K, V> e = current;
current = current.next;
action.accept(e);
if (map.modCount != expectedModCount)
throw new ConcurrentModificationException();
return true;
}
}
}
return false;
}
public int characteristics()
{
return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) | Spliterator.DISTINCT;
}
}
/* ------------------------------------------------------------ */
// LinkedHashMap support
/*
* The following package-protected methods are designed to be overridden by
* LinkedHashMap, but not by any other subclass. Nearly all other internal
* methods are also package-protected but are declared final, so can be used
* by LinkedHashMap, view classes, and HashSet.
*/
// Create a regular (non-tree) node
Node<K, V> newNode(int hash, K key, V value, Node<K, V> next)
{
return new Node<>(hash, key, value, next);
}
// For conversion from TreeNodes to plain nodes
Node<K, V> replacementNode(Node<K, V> p, Node<K, V> next)
{
return new Node<>(p.hash, p.key, p.value, next);
}
// Create a tree bin node
TreeNode<K, V> newTreeNode(int hash, K key, V value, Node<K, V> next)
{
return new TreeNode<>(hash, key, value, next);
}
// For treeifyBin
TreeNode<K, V> replacementTreeNode(Node<K, V> p, Node<K, V> next)
{
return new TreeNode<>(p.hash, p.key, p.value, next);
}
/**
* Reset to initial default state. Called by clone and readObject.
*/
void reinitialize()
{
table = null;
entrySet = null;
keySet = null;
values = null;
modCount = 0;
threshold = 0;
size = 0;
}
// Callbacks to allow LinkedHashMap post-actions
void afterNodeAccess(Node<K, V> p)
{}
void afterNodeInsertion(boolean evict)
{}
void afterNodeRemoval(Node<K, V> p)
{}
// Called only from writeObject, to ensure compatible ordering.
void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException
{
Node<K, V>[] tab;
if (size > 0 && (tab = table) != null)
{
for (int i = 0; i < tab.length; ++i)
{
for (Node<K, V> e = tab[i]; e != null; e = e.next)
{
s.writeObject(e.key);
s.writeObject(e.value);
}
}
}
}
/* ------------------------------------------------------------ */
// Tree bins
/**
* Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn extends
* Node) so can be used as extension of either regular or linked node.
*/
static final class TreeNode<K, V> extends LinkedHashMap.Entry<K, V>
{
TreeNode<K, V> parent; // red-black tree links
TreeNode<K, V> left;
TreeNode<K, V> right;
TreeNode<K, V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K, V> next)
{
super(hash, key, val, next);
}
/**
* Returns root of tree containing this node.
*/
final TreeNode<K, V> root()
{
for (TreeNode<K, V> r = this, p;;)
{
if ((p = r.parent) == null)
return r;
r = p;
}
}
/**
* Ensures that the given root is the first node of its bin.
*/
static <K, V> void moveRootToFront(Node<K, V>[] tab, TreeNode<K, V> root)
{
int n;
if (root != null && tab != null && (n = tab.length) > 0)
{
int index = (n - 1) & root.hash;
TreeNode<K, V> first = (TreeNode<K, V>)tab[index];
if (root != first)
{
Node<K, V> rn;
tab[index] = root;
TreeNode<K, V> rp = root.prev;
if ((rn = root.next) != null)
((TreeNode<K, V>)rn).prev = rp;
if (rp != null)
rp.next = rn;
if (first != null)
first.prev = root;
root.next = first;
root.prev = null;
}
assert checkInvariants(root);
}
}
/**
* Finds the node starting at root p with the given hash and key. The kc
* argument caches comparableClassFor(key) upon first use comparing
* keys.
*/
final TreeNode<K, V> find(int h, Object k, Class<?> kc)
{
TreeNode<K, V> p = this;
do
{
int ph, dir;
K pk;
TreeNode<K, V> pl = p.left, pr = p.right, q;
if ((ph = p.hash) > h)
p = pl;
else if (ph < h)
p = pr;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if (pl == null)
p = pr;
else if (pr == null)
p = pl;
else if ((kc != null || (kc = comparableClassFor(k)) != null)
&& (dir = compareComparables(kc, k, pk)) != 0)
p = (dir < 0) ? pl : pr;
else if ((q = pr.find(h, k, kc)) != null)
return q;
else
p = pl;
} while (p != null);
return null;
}
/**
* Calls find for root node.
*/
final TreeNode<K, V> getTreeNode(int h, Object k)
{
return ((parent != null) ? root() : this).find(h, k, null);
}
/**
* Tie-breaking utility for ordering insertions when equal hashCodes and
* non-comparable. We don't require a total order, just a consistent
* insertion rule to maintain equivalence across rebalancings.
* Tie-breaking further than necessary simplifies testing a bit.
*/
static int tieBreakOrder(Object a, Object b)
{
int d;
if (a == null || b == null || (d = a.getClass().getName().compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) <= System.identityHashCode(b) ? -1 : 1);
return d;
}
/**
* Forms tree of the nodes linked from this node.
*/
final void treeify(Node<K, V>[] tab)
{
TreeNode<K, V> root = null;
for (TreeNode<K, V> x = this, next; x != null; x = next)
{
next = (TreeNode<K, V>)x.next;
x.left = x.right = null;
if (root == null)
{
x.parent = null;
x.red = false;
root = x;
}
else
{
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K, V> p = root;;)
{
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null && (kc = comparableClassFor(k)) == null)
|| (dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);
TreeNode<K, V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null)
{
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
root = balanceInsertion(root, x);
break;
}
}
}
}
moveRootToFront(tab, root);
}
/**
* Returns a list of non-TreeNodes replacing those linked from this
* node.
*/
final Node<K, V> untreeify(HashMap<K, V> map)
{
Node<K, V> hd = null, tl = null;
for (Node<K, V> q = this; q != null; q = q.next)
{
Node<K, V> p = map.replacementNode(q, null);
if (tl == null)
hd = p;
else
tl.next = p;
tl = p;
}
return hd;
}
/**
* Tree version of putVal.
*/
final TreeNode<K, V> putTreeVal(HashMap<K, V> map, Node<K, V>[] tab, int h, K k, V v)
{
Class<?> kc = null;
boolean searched = false;
TreeNode<K, V> root = (parent != null) ? root() : this;
for (TreeNode<K, V> p = root;;)
{
int dir, ph;
K pk;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if ((kc == null && (kc = comparableClassFor(k)) == null)
|| (dir = compareComparables(kc, k, pk)) == 0)
{
if (!searched)
{
TreeNode<K, V> q, ch;
searched = true;
if (((ch = p.left) != null && (q = ch.find(h, k, kc)) != null)
|| ((ch = p.right) != null && (q = ch.find(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
}
TreeNode<K, V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null)
{
Node<K, V> xpn = xp.next;
TreeNode<K, V> x = map.newTreeNode(h, k, v, xpn);
if (dir <= 0)
xp.left = x;
else
xp.right = x;
xp.next = x;
x.parent = x.prev = xp;
if (xpn != null)
((TreeNode<K, V>)xpn).prev = x;
moveRootToFront(tab, balanceInsertion(root, x));
return null;
}
}
}
/**
* Removes the given node, that must be present before this call. This
* is messier than typical red-black deletion code because we cannot
* swap the contents of an interior node with a leaf successor that is
* pinned by "next" pointers that are accessible independently during
* traversal. So instead we swap the tree linkages. If the current tree
* appears to have too few nodes, the bin is converted back to a plain
* bin. (The test triggers somewhere between 2 and 6 nodes, depending on
* tree structure).
*/
final void removeTreeNode(HashMap<K, V> map, Node<K, V>[] tab, boolean movable)
{
int n;
if (tab == null || (n = tab.length) == 0)
return;
int index = (n - 1) & hash;
TreeNode<K, V> first = (TreeNode<K, V>)tab[index], root = first, rl;
TreeNode<K, V> succ = (TreeNode<K, V>)next, pred = prev;
if (pred == null)
tab[index] = first = succ;
else
pred.next = succ;
if (succ != null)
succ.prev = pred;
if (first == null)
return;
if (root.parent != null)
root = root.root();
if (root == null || (movable && (root.right == null || (rl = root.left) == null || rl.left == null)))
{
tab[index] = first.untreeify(map); // too small
return;
}
TreeNode<K, V> p = this, pl = left, pr = right, replacement;
if (pl != null && pr != null)
{
TreeNode<K, V> s = pr, sl;
while ((sl = s.left) != null) // find successor
s = sl;
boolean c = s.red;
s.red = p.red;
p.red = c; // swap colors
TreeNode<K, V> sr = s.right;
TreeNode<K, V> pp = p.parent;
if (s == pr)
{ // p was s's direct parent
p.parent = s;
s.right = p;
}
else
{
TreeNode<K, V> sp = s.parent;
if ((p.parent = sp) != null)
{
if (s == sp.left)
sp.left = p;
else
sp.right = p;
}
if ((s.right = pr) != null)
pr.parent = s;
}
p.left = null;
if ((p.right = sr) != null)
sr.parent = p;
if ((s.left = pl) != null)
pl.parent = s;
if ((s.parent = pp) == null)
root = s;
else if (p == pp.left)
pp.left = s;
else
pp.right = s;
if (sr != null)
replacement = sr;
else
replacement = p;
}
else if (pl != null)
replacement = pl;
else if (pr != null)
replacement = pr;
else
replacement = p;
if (replacement != p)
{
TreeNode<K, V> pp = replacement.parent = p.parent;
if (pp == null)
root = replacement;
else if (p == pp.left)
pp.left = replacement;
else
pp.right = replacement;
p.left = p.right = p.parent = null;
}
TreeNode<K, V> r = p.red ? root : balanceDeletion(root, replacement);
if (replacement == p)
{ // detach
TreeNode<K, V> pp = p.parent;
p.parent = null;
if (pp != null)
{
if (p == pp.left)
pp.left = null;
else if (p == pp.right)
pp.right = null;
}
}
if (movable)
moveRootToFront(tab, r);
}
/**
 * Splits the nodes in a tree bin into lower and upper tree bins, or
 * untreeifies if the resulting bin is now too small. Called only from
 * resize; see the discussion above about split bits and indices.
 */
final void split(HashMap<K, V> map, Node<K, V>[] tab, int index, int bit)
{
TreeNode<K, V> b = this;
// Relink into lo and hi lists, preserving order
TreeNode<K, V> loHead = null, loTail = null;
TreeNode<K, V> hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode<K, V> e = b, next; e != null; e = next)
{
next = (TreeNode<K, V>)e.next;
e.next = null;
if ((e.hash & bit) == 0)
{
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
++lc;
}
else
{
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
++hc;
}
}
if (loHead != null)
{
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
else
{
tab[index] = loHead;
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
}
}
if (hiHead != null)
{
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else
{
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}
/* ------------------------------------------------------------ */
// Red-black tree methods, all adapted from CLR
static <K, V> TreeNode<K, V> rotateLeft(TreeNode<K, V> root, TreeNode<K, V> p)
{
TreeNode<K, V> r, pp, rl;
if (p != null && (r = p.right) != null)
{
if ((rl = p.right = r.left) != null)
rl.parent = p;
if ((pp = r.parent = p.parent) == null)
(root = r).red = false;
else if (pp.left == p)
pp.left = r;
else
pp.right = r;
r.left = p;
p.parent = r;
}
return root;
}
static <K, V> TreeNode<K, V> rotateRight(TreeNode<K, V> root, TreeNode<K, V> p)
{
TreeNode<K, V> l, pp, lr;
if (p != null && (l = p.left) != null)
{
if ((lr = p.left = l.right) != null)
lr.parent = p;
if ((pp = l.parent = p.parent) == null)
(root = l).red = false;
else if (pp.right == p)
pp.right = l;
else
pp.left = l;
l.right = p;
p.parent = l;
}
return root;
}
static <K, V> TreeNode<K, V> balanceInsertion(TreeNode<K, V> root, TreeNode<K, V> x)
{
x.red = true;
for (TreeNode<K, V> xp, xpp, xppl, xppr;;)
{
if ((xp = x.parent) == null)
{
x.red = false;
return x;
}
else if (!xp.red || (xpp = xp.parent) == null)
return root;
if (xp == (xppl = xpp.left))
{
if ((xppr = xpp.right) != null && xppr.red)
{
xppr.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else
{
if (x == xp.right)
{
root = rotateLeft(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null)
{
xp.red = false;
if (xpp != null)
{
xpp.red = true;
root = rotateRight(root, xpp);
}
}
}
}
else
{
if (xppl != null && xppl.red)
{
xppl.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else
{
if (x == xp.left)
{
root = rotateRight(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null)
{
xp.red = false;
if (xpp != null)
{
xpp.red = true;
root = rotateLeft(root, xpp);
}
}
}
}
}
}
static <K, V> TreeNode<K, V> balanceDeletion(TreeNode<K, V> root, TreeNode<K, V> x)
{
for (TreeNode<K, V> xp, xpl, xpr;;)
{
if (x == null || x == root)
return root;
else if ((xp = x.parent) == null)
{
x.red = false;
return x;
}
else if (x.red)
{
x.red = false;
return root;
}
else if ((xpl = xp.left) == x)
{
if ((xpr = xp.right) != null && xpr.red)
{
xpr.red = false;
xp.red = true;
root = rotateLeft(root, xp);
xpr = (xp = x.parent) == null ? null : xp.right;
}
if (xpr == null)
x = xp;
else
{
TreeNode<K, V> sl = xpr.left, sr = xpr.right;
if ((sr == null || !sr.red) && (sl == null || !sl.red))
{
xpr.red = true;
x = xp;
}
else
{
if (sr == null || !sr.red)
{
if (sl != null)
sl.red = false;
xpr.red = true;
root = rotateRight(root, xpr);
xpr = (xp = x.parent) == null ? null : xp.right;
}
if (xpr != null)
{
xpr.red = (xp == null) ? false : xp.red;
if ((sr = xpr.right) != null)
sr.red = false;
}
if (xp != null)
{
xp.red = false;
root = rotateLeft(root, xp);
}
x = root;
}
}
}
else
{ // symmetric
if (xpl != null && xpl.red)
{
xpl.red = false;
xp.red = true;
root = rotateRight(root, xp);
xpl = (xp = x.parent) == null ? null : xp.left;
}
if (xpl == null)
x = xp;
else
{
TreeNode<K, V> sl = xpl.left, sr = xpl.right;
if ((sl == null || !sl.red) && (sr == null || !sr.red))
{
xpl.red = true;
x = xp;
}
else
{
if (sl == null || !sl.red)
{
if (sr != null)
sr.red = false;
xpl.red = true;
root = rotateLeft(root, xpl);
xpl = (xp = x.parent) == null ? null : xp.left;
}
if (xpl != null)
{
xpl.red = (xp == null) ? false : xp.red;
if ((sl = xpl.left) != null)
sl.red = false;
}
if (xp != null)
{
xp.red = false;
root = rotateRight(root, xp);
}
x = root;
}
}
}
}
}
/**
* Recursive invariant check
*/
static <K, V> boolean checkInvariants(TreeNode<K, V> t)
{
TreeNode<K, V> tp = t.parent, tl = t.left, tr = t.right, tb = t.prev, tn = (TreeNode<K, V>)t.next;
if (tb != null && tb.next != t)
return false;
if (tn != null && tn.prev != t)
return false;
if (tp != null && t != tp.left && t != tp.right)
return false;
if (tl != null && (tl.parent != t || tl.hash > t.hash))
return false;
if (tr != null && (tr.parent != t || tr.hash < t.hash))
return false;
if (t.red && tl != null && tl.red && tr != null && tr.red)
return false;
if (tl != null && !checkInvariants(tl))
return false;
if (tr != null && !checkInvariants(tr))
return false;
return true;
}
}
}
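To close, a short usage sketch of the JDK 8 default-method overrides analyzed above (the demo class name is hypothetical; getOrDefault, putIfAbsent, computeIfAbsent, and merge are the real HashMap methods):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Jdk8MapMethodsDemo
{
    public static void main(String[] args)
    {
        Map<String, Integer> counts = new HashMap<>();

        // merge: insert 1, or combine with the existing value.
        for (String w : new String[] {"a", "b", "a"})
            counts.merge(w, 1, Integer::sum);
        System.out.println(counts);                      // {a=2, b=1}

        System.out.println(counts.getOrDefault("c", 0)); // 0
        counts.putIfAbsent("c", 7);                      // only set if absent
        System.out.println(counts.get("c"));             // 7

        // computeIfAbsent: build multimap-style values lazily.
        Map<String, List<Integer>> index = new HashMap<>();
        index.computeIfAbsent("evens", k -> new ArrayList<>()).add(2);
        index.computeIfAbsent("evens", k -> new ArrayList<>()).add(4);
        System.out.println(index);                       // {evens=[2, 4]}
    }
}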